Jan 28 15:01:21 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 28 15:01:21 crc restorecon[4694]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:01:21 crc restorecon[4694]:
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 15:01:21 crc restorecon[4694]:
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 
15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc 
restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:21 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:01:22 crc restorecon[4694]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:01:22 crc restorecon[4694]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:01:22 crc restorecon[4694]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 28 15:01:22 crc kubenswrapper[4893]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 15:01:22 crc kubenswrapper[4893]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 28 15:01:22 crc kubenswrapper[4893]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 15:01:22 crc kubenswrapper[4893]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
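
The deprecation warnings above, and the two that follow for --pod-infra-container-image and --system-reserved, mostly point at the same fix: set the option in the file passed via the kubelet's --config flag rather than on the command line. As a rough sketch of that migration, the following Python snippet prints the equivalent KubeletConfiguration stanza. The field names come from the upstream kubelet.config.k8s.io/v1beta1 schema, but every concrete value is a placeholder, not read from this node.

    import textwrap

    # Sketch only: the deprecated kubelet flags logged above map onto fields of
    # the KubeletConfiguration (kubelet.config.k8s.io/v1beta1) file that the
    # --config flag points at. All values below are placeholders.
    kubelet_config = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        containerRuntimeEndpoint: unix:///var/run/crio/crio.sock  # was --container-runtime-endpoint
        volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec  # was --volume-plugin-dir
        registerWithTaints:            # was --register-with-taints
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        systemReserved:                # was --system-reserved
          cpu: 500m
          memory: 1Gi
        evictionHard:                  # the log suggests --eviction-hard/--eviction-soft
          memory.available: 100Mi     # in place of --minimum-container-ttl-duration
    """)
    print(kubelet_config)

On a CRC node the kubelet configuration is machine-managed, so treat this purely as an illustration of the flag-to-field mapping, not as a file to drop in by hand.
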
Jan 28 15:01:22 crc kubenswrapper[4893]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 28 15:01:22 crc kubenswrapper[4893]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.661323 4893 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664209 4893 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664228 4893 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664233 4893 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664237 4893 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664241 4893 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664245 4893 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664249 4893 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664254 4893 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664258 4893 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664262 4893 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664266 4893 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664269 4893 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664273 4893 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664276 4893 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664280 4893 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664284 4893 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664288 4893 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664291 4893 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664295 4893 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664298 4893 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664302 4893 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664307 4893 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664312 4893 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664315 4893 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664319 4893 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664323 4893 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664327 4893 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664330 4893 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664335 4893 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664339 4893 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664343 4893 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664347 4893 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664357 4893 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664360 4893 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664365 4893 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
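These W-level lines are expected on an OpenShift node: the kubelet is handed the cluster-wide gate list, sets the names it was compiled with, and merely warns on operator-side gates it does not know, so startup never aborts on them. A minimal sketch of that parse-and-warn pattern, illustrative rather than the actual k8s.io/component-base featuregate code; the "known" set here is just the handful of gates this log shows being recognized:

    // featuregate_sketch.go: parse "Name=bool,Name=bool" specs, warning on
    // unknown names instead of failing, the way the lines above behave.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // known stands in for the gate registry compiled into the binary.
    var known = map[string]bool{
    	"CloudDualStackNodeIPs":                  true,
    	"DisableKubeletCloudCredentialProviders": true,
    	"KMSv1":                                  true,
    	"ValidatingAdmissionPolicy":              true,
    }

    func set(spec string) map[string]bool {
    	gates := map[string]bool{}
    	for _, kv := range strings.Split(spec, ",") {
    		name, val, ok := strings.Cut(kv, "=")
    		if !ok {
    			continue
    		}
    		if !known[name] {
    			fmt.Printf("W] unrecognized feature gate: %s\n", name)
    			continue
    		}
    		b, err := strconv.ParseBool(val)
    		if err != nil {
    			continue // malformed value; a real registry would error here
    		}
    		gates[name] = b
    	}
    	return gates
    }

    func main() {
    	fmt.Println(set("KMSv1=true,GatewayAPI=true,CloudDualStackNodeIPs=true"))
    }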
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664369 4893 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664374 4893 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664378 4893 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664381 4893 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664385 4893 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664388 4893 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664392 4893 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664396 4893 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664399 4893 feature_gate.go:330] unrecognized feature gate: Example
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664403 4893 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664406 4893 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664410 4893 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664413 4893 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664416 4893 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664420 4893 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664423 4893 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664427 4893 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664430 4893 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664434 4893 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664437 4893 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664441 4893 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664444 4893 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664448 4893 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664451 4893 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664455 4893 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664459 4893 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664465 4893 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664490 4893 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664496 4893 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664499 4893 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664503 4893 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664506 4893 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664509 4893 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664519 4893 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664523 4893 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.664526 4893 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665335 4893 flags.go:64] FLAG: --address="0.0.0.0"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665347 4893 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665358 4893 flags.go:64] FLAG: --anonymous-auth="true"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665365 4893 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665371 4893 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665375 4893 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665381 4893 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665387 4893 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665391 4893 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665396 4893 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665400 4893 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665405 4893 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665410 4893 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665415 4893 flags.go:64] FLAG: --cgroup-root=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665419 4893 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665423 4893 flags.go:64] FLAG: --client-ca-file=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665428 4893 flags.go:64] FLAG: --cloud-config=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665433 4893 flags.go:64] FLAG: --cloud-provider=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665437 4893 flags.go:64] FLAG: --cluster-dns="[]"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665446 4893 flags.go:64] FLAG: --cluster-domain=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665450 4893 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665454 4893 flags.go:64] FLAG: --config-dir=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665458 4893 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665462 4893 flags.go:64] FLAG: --container-log-max-files="5"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665484 4893 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665489 4893 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665493 4893 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665497 4893 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665501 4893 flags.go:64] FLAG: --contention-profiling="false"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665505 4893 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665509 4893 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665513 4893 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665517 4893 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665533 4893 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665538 4893 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665542 4893 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665546 4893 flags.go:64] FLAG: --enable-load-reader="false"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665551 4893 flags.go:64] FLAG: --enable-server="true"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665554 4893 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665562 4893 flags.go:64] FLAG: --event-burst="100"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665568 4893 flags.go:64] FLAG: --event-qps="50"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665572 4893 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665576 4893 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665580 4893 flags.go:64] FLAG: --eviction-hard=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665586 4893 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665590 4893 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665594 4893 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665598 4893 flags.go:64] FLAG: --eviction-soft=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665602 4893 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665606 4893 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665610 4893 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665614 4893 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665618 4893 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665621 4893 flags.go:64] FLAG: --fail-swap-on="true"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665626 4893 flags.go:64] FLAG: --feature-gates=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665631 4893 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665635 4893 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665639 4893 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665643 4893 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665647 4893 flags.go:64] FLAG: --healthz-port="10248"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665651 4893 flags.go:64] FLAG: --help="false"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665655 4893 flags.go:64] FLAG: --hostname-override=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665670 4893 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665674 4893 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665678 4893 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665682 4893 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665686 4893 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665690 4893 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665694 4893 flags.go:64] FLAG: --image-service-endpoint=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665704 4893 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665708 4893 flags.go:64] FLAG: --kube-api-burst="100"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665712 4893 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665717 4893 flags.go:64] FLAG: --kube-api-qps="50"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665721 4893 flags.go:64] FLAG: --kube-reserved=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665725 4893 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665729 4893 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665734 4893 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665738 4893 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665742 4893 flags.go:64] FLAG: --lock-file=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665746 4893 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665750 4893 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665754 4893 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665760 4893 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665764 4893 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665768 4893 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665772 4893 flags.go:64] FLAG: --logging-format="text"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665776 4893 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665780 4893 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665784 4893 flags.go:64] FLAG: --manifest-url=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665788 4893 flags.go:64] FLAG: --manifest-url-header=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665794 4893 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665798 4893 flags.go:64] FLAG: --max-open-files="1000000"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665803 4893 flags.go:64] FLAG: --max-pods="110"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665807 4893 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665811 4893 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665815 4893 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665820 4893 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665824 4893 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665828 4893 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665832 4893 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665842 4893 flags.go:64] FLAG: --node-status-max-images="50"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665846 4893 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665851 4893 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665855 4893 flags.go:64] FLAG: --pod-cidr=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665860 4893 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665872 4893 flags.go:64] FLAG: --pod-manifest-path=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665876 4893 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665880 4893 flags.go:64] FLAG: --pods-per-core="0"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665884 4893 flags.go:64] FLAG: --port="10250"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665888 4893 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665892 4893 flags.go:64] FLAG: --provider-id=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665896 4893 flags.go:64] FLAG: --qos-reserved=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665900 4893 flags.go:64] FLAG: --read-only-port="10255"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665904 4893 flags.go:64] FLAG: --register-node="true"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665908 4893 flags.go:64] FLAG: --register-schedulable="true"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665912 4893 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665919 4893 flags.go:64] FLAG: --registry-burst="10"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665923 4893 flags.go:64] FLAG: --registry-qps="5"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665927 4893 flags.go:64] FLAG: --reserved-cpus=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665931 4893 flags.go:64] FLAG: --reserved-memory=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665936 4893 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665941 4893 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665945 4893 flags.go:64] FLAG: --rotate-certificates="false"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665949 4893 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665953 4893 flags.go:64] FLAG: --runonce="false"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665957 4893 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665962 4893 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665967 4893 flags.go:64] FLAG: --seccomp-default="false"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665971 4893 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665976 4893 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665981 4893 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665986 4893 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665992 4893 flags.go:64] FLAG: --storage-driver-password="root"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.665997 4893 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.666002 4893 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.666006 4893 flags.go:64] FLAG: --storage-driver-user="root"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.666011 4893 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.666017 4893 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.666022 4893 flags.go:64] FLAG: --system-cgroups=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.666026 4893 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.666032 4893 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.666043 4893 flags.go:64] FLAG: --tls-cert-file=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.666048 4893 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.666056 4893 flags.go:64] FLAG: --tls-min-version=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.666060 4893 flags.go:64] FLAG: --tls-private-key-file=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.666064 4893 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.666071 4893 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.666076 4893 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.666081 4893 flags.go:64] FLAG: --v="2"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.666089 4893 flags.go:64] FLAG: --version="false"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.666096 4893 flags.go:64] FLAG: --vmodule=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.666102 4893 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.666108 4893 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666255 4893 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666263 4893 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666268 4893 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666272 4893 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666278 4893 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666282 4893 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666287 4893 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666291 4893 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666297 4893 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
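Each "FLAG:" line above comes from a single startup pass that walks the parsed flag set and logs every flag, defaulted or not, at verbosity 2 (flags.go:64). A minimal sketch of that dump pattern using github.com/spf13/pflag, the flag library Kubernetes components build on; the two flags and sample values below are taken from this log, the rest is illustrative scaffolding:

    // flagdump_sketch.go: dump every registered flag the way the kubelet's
    // startup log does, including flags left at their defaults.
    package main

    import (
    	"fmt"

    	"github.com/spf13/pflag"
    )

    func main() {
    	fs := pflag.NewFlagSet("kubelet-sketch", pflag.ExitOnError)
    	fs.String("node-ip", "", "node IP address")
    	fs.Int("max-pods", 110, "maximum number of pods")
    	_ = fs.Parse([]string{"--node-ip=192.168.126.11"})

    	// VisitAll covers all flags, set or defaulted, unlike Visit.
    	fs.VisitAll(func(f *pflag.Flag) {
    		fmt.Printf("FLAG: --%s=%q\n", f.Name, f.Value.String())
    	})
    }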
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666302 4893 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666307 4893 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666312 4893 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666316 4893 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666321 4893 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666327 4893 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666333 4893 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666341 4893 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666346 4893 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666350 4893 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666355 4893 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666360 4893 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666366 4893 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666371 4893 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666376 4893 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666387 4893 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666391 4893 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666395 4893 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666399 4893 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666403 4893 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666406 4893 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666410 4893 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666413 4893 feature_gate.go:330] unrecognized feature gate: Example
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666417 4893 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666420 4893 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666424 4893 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666427 4893 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666431 4893 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666434 4893 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666437 4893 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666441 4893 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666444 4893 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666448 4893 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666451 4893 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666457 4893 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666461 4893 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666464 4893 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666485 4893 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666490 4893 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
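The --system-reserved value in the flag dump above ("cpu=200m,ephemeral-storage=350Mi,memory=350Mi") is carved out of node capacity when the kubelet computes allocatable resources. A rough sketch of just the memory arithmetic with k8s.io/apimachinery's resource.Quantity, not the kubelet's node-allocatable code; the capacity figure is the MemoryCapacity reported in the Machine line further down, and eviction thresholds (unset on this command line) would subtract the same way:

    // allocatable_sketch.go: illustrate allocatable = capacity - reserved
    // for memory only, under the values visible in this log.
    package main

    import (
    	"fmt"

    	"k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
    	capacity := resource.MustParse("33654124544") // MemoryCapacity from the Machine line below
    	systemReserved := resource.MustParse("350Mi") // memory portion of --system-reserved above

    	allocatable := capacity.DeepCopy()
    	allocatable.Sub(systemReserved)
    	fmt.Printf("memory: capacity=%s allocatable=%s\n", capacity.String(), allocatable.String())
    }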
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666495 4893 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666499 4893 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666503 4893 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666507 4893 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666511 4893 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666515 4893 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666519 4893 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666523 4893 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666526 4893 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666530 4893 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666533 4893 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666537 4893 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666547 4893 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666550 4893 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666554 4893 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666557 4893 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666561 4893 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666564 4893 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666568 4893 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666571 4893 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666575 4893 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666578 4893 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.666581 4893 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.667544 4893 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.682254 4893 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.682305 4893 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682377 4893 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682385 4893 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682390 4893 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682395 4893 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682400 4893 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682405 4893 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682409 4893 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682413 4893 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682416 4893 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682420 4893 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682424 4893 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682428 4893 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682433 4893 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682440 4893 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682446 4893 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
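A little further down, the kubelet loads its client certificate from /var/lib/kubelet/pki/kubelet-client-current.pem, logs its expiration (2026-02-24 05:52:08 UTC), and derives a rotation deadline from it. A minimal standard-library sketch of reading that expiry; it assumes the first CERTIFICATE block in the PEM bundle is the client cert, which is how the file is described in the log but is not guaranteed in general:

    // certexpiry_sketch.go: print the NotAfter of the kubelet client cert,
    // mirroring the expiration the certificate_manager lines below report.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	raw, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
    	if err != nil {
    		log.Fatal(err)
    	}
    	// The file holds cert and key; scan for the first CERTIFICATE block.
    	for block, rest := pem.Decode(raw); block != nil; block, rest = pem.Decode(rest) {
    		if block.Type != "CERTIFICATE" {
    			continue
    		}
    		cert, err := x509.ParseCertificate(block.Bytes)
    		if err != nil {
    			log.Fatal(err)
    		}
    		fmt.Printf("certificate expiration: %s\n", cert.NotAfter)
    		return
    	}
    	fmt.Println("no certificate block found")
    }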
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682450 4893 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682454 4893 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682458 4893 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682462 4893 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682466 4893 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682483 4893 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682488 4893 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682492 4893 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682496 4893 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682500 4893 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682504 4893 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682507 4893 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682511 4893 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682514 4893 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682518 4893 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682523 4893 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682528 4893 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682531 4893 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682535 4893 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682538 4893 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682542 4893 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682546 4893 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682549 4893 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682552 4893 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682556 4893 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682559 4893 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682563 4893 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682566 4893 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682570 4893 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682573 4893 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682577 4893 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682581 4893 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682584 4893 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682588 4893 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682591 4893 feature_gate.go:330] unrecognized feature gate: Example
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682595 4893 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682598 4893 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682602 4893 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682605 4893 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682609 4893 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682612 4893 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682615 4893 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682619 4893 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682622 4893 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682626 4893 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682629 4893 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682633 4893 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682636 4893 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682641 4893 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682646 4893 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682650 4893 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682653 4893 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682658 4893 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682662 4893 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682666 4893 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682671 4893 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.682679 4893 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682799 4893 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682807 4893 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682812 4893 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682817 4893 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682821 4893 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682825 4893 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682831 4893 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682835 4893 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682840 4893 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682844 4893 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682848 4893 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682852 4893 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682856 4893 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682860 4893 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682865 4893 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682869 4893 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682906 4893 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682914 4893 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682918 4893 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682923 4893 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682927 4893 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682932 4893 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682936 4893 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682940 4893 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682964 4893 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682969 4893 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682973 4893 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682977 4893 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682980 4893 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682985 4893 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682988 4893 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682993 4893 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.682997 4893 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683001 4893 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683005 4893 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683009 4893 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683013 4893 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683016 4893 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683020 4893 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683024 4893 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683028 4893 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683032 4893 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683036 4893 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683040 4893 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683045 4893 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683048 4893 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683052 4893 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683057 4893 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683060 4893 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683064 4893 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683068 4893 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683072 4893 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683076 4893 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683080 4893 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683083 4893 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683088 4893 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683094 4893 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683099 4893 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683103 4893 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683107 4893 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683111 4893 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683116 4893 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683120 4893 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683123 4893 feature_gate.go:330] unrecognized feature gate: Example
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683127 4893 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683131 4893 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683134 4893 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683139 4893 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683143 4893 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683147 4893 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.683151 4893 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.683157 4893 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.683945 4893 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.689608 4893 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.689720 4893 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.691100 4893 server.go:997] "Starting client certificate rotation"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.691135 4893 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.692127 4893 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-12 13:54:30.646871405 +0000 UTC
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.692181 4893 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.714563 4893 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.716846 4893 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 28 15:01:22 crc kubenswrapper[4893]: E0128 15:01:22.717668 4893 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.740637 4893 log.go:25] "Validated CRI v1 runtime API"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.775879 4893 log.go:25] "Validated CRI v1 image API"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.777744 4893 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.782450 4893 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-28-14-57-25-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.782528 4893 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.800097 4893 manager.go:217] Machine: {Timestamp:2026-01-28 15:01:22.797886514 +0000 UTC m=+0.571501552 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:229bc78e-0037-4fd6-b24e-ff333227d169 BootID:a030eed1-afa1-4d30-ad93-dc087f4d77df Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:17:e4:0b Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:17:e4:0b Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:66:42:a6 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:b5:a4:c9 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ae:5c:f4 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:ed:6b:14 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:be:aa:e2:6a:be:ca Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:5e:fe:f7:ed:9e:6f Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}]
UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.800307 4893 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
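The records above close cAdvisor's machine inventory for this node: 12 single-threaded virtual sockets, ~31.3 GiB of RAM, the filesystem and NIC maps, and a note that perf event counters are unavailable because cAdvisor was built without cgo/libpfm. Every kubenswrapper record in this journal shares one shape: a journald prefix (timestamp, host, unit[pid]) followed by a klog header (severity letter, MMDD, wall time, PID, source file:line) and the message. A minimal parsing sketch under that assumption; the regex and field names are illustrative, not taken from any kubelet tooling:

import re

# Hypothetical helper: split one journald line carrying a klog record into its
# parts, assuming the "Mon DD HH:MM:SS host kubenswrapper[pid]: <klog>" shape
# seen throughout this log.
KLOG = re.compile(
    r"^(?P<month>\w{3})\s+(?P<day>\d+) (?P<time>[\d:]+) (?P<host>\S+) "
    r"kubenswrapper\[(?P<unit_pid>\d+)\]: "
    r"(?P<sev>[IWEF])(?P<mmdd>\d{4}) (?P<klog_time>[\d:.]+)\s+(?P<pid>\d+) "
    r"(?P<src>[\w.]+:\d+)\] (?P<msg>.*)$"
)

def parse(line):
    m = KLOG.match(line)
    return m.groupdict() if m else None

sample = ("Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.800307 4893 "
          "manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm "
          "support. Perf event counters are not available.")
rec = parse(sample)
print(rec["sev"], rec["src"], rec["msg"])  # I manager_no_libpfm.go:29 cAdvisor is build ...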
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.800438 4893 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.800691 4893 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.800830 4893 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.800854 4893 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.801015 4893 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.801025 4893 container_manager_linux.go:303] "Creating device plugin manager" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.801541 4893 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.801569 4893 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.802155 4893 state_mem.go:36] "Initialized new in-memory state store" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.802237 4893 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.807226 4893 kubelet.go:418] "Attempting to sync node with API server" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.807251 4893 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" 
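The Container Manager record above pins the resource bookkeeping for this node: KubeReserved is null, SystemReserved is 200m CPU / 350Mi memory / 350Mi ephemeral-storage, and the hard eviction threshold for memory.available is 100Mi. Kubernetes derives node allocatable as capacity minus kube-reserved, system-reserved, and the hard eviction threshold; a short sketch redoing that documented arithmetic with the MemoryCapacity value from the Machine record above:

MiB = 1024 * 1024
capacity = 33654124544       # MemoryCapacity from the Machine record above
system_reserved = 350 * MiB  # SystemReserved["memory"] = "350Mi"
kube_reserved = 0            # KubeReserved is null in this node config
eviction_hard = 100 * MiB    # hard eviction threshold for memory.available

# Documented kubelet formula:
#   allocatable = capacity - kube-reserved - system-reserved - eviction-hard
allocatable = capacity - kube_reserved - system_reserved - eviction_hard
print(allocatable, f"bytes (~{allocatable / 1024**3:.2f} GiB)")
# 33182265344 bytes (~30.90 GiB)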
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.807292 4893 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.807306 4893 kubelet.go:324] "Adding apiserver pod source" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.807318 4893 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.810889 4893 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.811676 4893 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.813728 4893 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.813753 4893 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Jan 28 15:01:22 crc kubenswrapper[4893]: E0128 15:01:22.813873 4893 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:01:22 crc kubenswrapper[4893]: E0128 15:01:22.813883 4893 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.814085 4893 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.815543 4893 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.815627 4893 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.815685 4893 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.815734 4893 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.815790 4893 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.815838 4893 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.815895 4893 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.815953 4893 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" 
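The reflector failures above, like the CSR post and lease errors elsewhere in this startup, all reduce to dial tcp 38.102.83.9:6443: connect: connection refused: the kubelet comes up before the kube-apiserver it points at, which on this node runs as a static pod out of /etc/kubernetes/manifests, so every client-go call fails until that pod opens port 6443 and the kubelet's retries succeed. A stdlib probe for the same condition, assuming the same host and port:

import socket

# Probe the endpoint the kubelet keeps dialing. "Connection refused" means the
# port is closed (no apiserver listening yet); a timeout or routing error would
# point at network trouble instead.
try:
    with socket.create_connection(("api-int.crc.testing", 6443), timeout=3):
        print("port open: kube-apiserver is accepting connections")
except ConnectionRefusedError:
    print("connection refused: nothing listening on 6443 yet")
except OSError as exc:
    print(f"unreachable: {exc}")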
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.816004 4893 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.816058 4893 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.816118 4893 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.816170 4893 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.818236 4893 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.818976 4893 server.go:1280] "Started kubelet" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.819350 4893 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.819724 4893 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.820578 4893 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 15:01:22 crc systemd[1]: Started Kubernetes Kubelet. Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.821172 4893 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.822125 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.822177 4893 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.822322 4893 server.go:460] "Adding debug handlers to kubelet server" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.822333 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 04:04:54.376077739 +0000 UTC Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.827554 4893 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.827571 4893 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.827628 4893 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 28 15:01:22 crc kubenswrapper[4893]: E0128 15:01:22.827944 4893 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.829062 4893 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Jan 28 15:01:22 crc kubenswrapper[4893]: E0128 15:01:22.829185 4893 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:01:22 crc kubenswrapper[4893]: E0128 15:01:22.831653 4893 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="200ms" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.832067 4893 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.832166 4893 factory.go:55] Registering systemd factory Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.832237 4893 factory.go:221] Registration of the systemd container factory successfully Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.835783 4893 factory.go:153] Registering CRI-O factory Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.835837 4893 factory.go:221] Registration of the crio container factory successfully Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.835869 4893 factory.go:103] Registering Raw factory Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.835883 4893 manager.go:1196] Started watching for new ooms in manager Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.837146 4893 manager.go:319] Starting recovery of all containers Jan 28 15:01:22 crc kubenswrapper[4893]: E0128 15:01:22.836272 4893 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.9:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188eed2d64852fe6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 15:01:22.818944998 +0000 UTC m=+0.592560026,LastTimestamp:2026-01-28 15:01:22.818944998 +0000 UTC m=+0.592560026,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.845731 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.845811 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.845836 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.845859 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.845881 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.845902 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.845921 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.845940 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.845963 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.845984 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846004 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846024 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846043 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846107 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846140 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846165 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846185 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846204 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846223 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846240 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846260 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846278 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846298 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846320 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846342 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846361 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846385 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846410 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846430 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846449 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846496 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846528 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846550 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846602 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846622 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846689 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846734 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846760 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846780 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846799 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846820 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846844 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846863 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846884 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846902 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846921 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846941 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846963 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.846981 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.847003 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.847025 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.847047 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.847117 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.847144 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.847166 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.847187 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.847211 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.847231 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.847251 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.847273 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.847296 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.847315 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.847334 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.847355 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.847374 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.847404 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.847422 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.847442 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.847463 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.847511 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.847535 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.851673 4893 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.851823 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.851938 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.852052 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.852142 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.852246 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.852330 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.852445 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.852559 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.852650 4893 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.852758 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.852854 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.852951 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.853047 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.853149 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.853242 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.853326 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.852343 4893 manager.go:324] Recovery completed Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.853424 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.853592 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.853653 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 28 15:01:22 crc 
kubenswrapper[4893]: I0128 15:01:22.853706 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.853766 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.853826 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.853888 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.853964 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.854038 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.854111 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.854185 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.854271 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.854350 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.854424 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.854546 4893 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.854639 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.854768 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.854865 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.854951 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.855042 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.855124 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.855213 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.855297 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.855385 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.855491 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.855582 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.855678 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.855772 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.855860 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.855945 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.856027 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.856125 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.856214 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.856294 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.856376 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.856456 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.856568 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.856650 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.856735 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.856820 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.856899 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.856989 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.857091 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.857171 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.857266 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.857325 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.857404 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.857508 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.857594 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.857673 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.857748 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.857811 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.857879 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.857952 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.858024 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.858084 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.858143 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.858289 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.858406 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.858501 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.858573 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.858630 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.858719 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.858781 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.858841 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.858899 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.858957 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.859012 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.859065 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.859138 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.859221 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.859297 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.859372 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.859429 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.859691 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.859786 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.859921 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.860011 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.860071 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.860149 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.860239 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.860312 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.860373 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.860439 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.860531 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.860602 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.860666 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.860735 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.860811 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.860885 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.860968 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.861042 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.861126 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.861200 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.861276 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.861353 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.861425 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.861517 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.861607 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.861695 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.861776 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.861854 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.861915 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.861974 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.862281 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.862382 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.862490 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.862601 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.864735 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.865160 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.865184 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.865207 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.865221 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.865256 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.865270 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.865281 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.865297 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.865310 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.865324 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.865337 4893 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.865346 4893 reconstruct.go:97] "Volume reconstruction finished"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.865354 4893 reconciler.go:26] "Reconciler: start to sync state"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.869177 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.871232 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.871277 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.871288 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.873229 4893 cpu_manager.go:225] "Starting CPU manager" policy="none"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.873248 4893 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.873270 4893 state_mem.go:36] "Initialized new in-memory state store"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.887555 4893 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.887669 4893 policy_none.go:49] "None policy: Start"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.889100 4893 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.889191 4893 state_mem.go:35] "Initializing new in-memory state store"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.890268 4893 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.890370 4893 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.890466 4893 kubelet.go:2335] "Starting kubelet main sync loop"
Jan 28 15:01:22 crc kubenswrapper[4893]: E0128 15:01:22.890648 4893 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 28 15:01:22 crc kubenswrapper[4893]: W0128 15:01:22.891809 4893 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Jan 28 15:01:22 crc kubenswrapper[4893]: E0128 15:01:22.891901 4893 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Jan 28 15:01:22 crc kubenswrapper[4893]: E0128 15:01:22.928300 4893 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.970117 4893 manager.go:334] "Starting Device Plugin manager"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.970178 4893 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.970192 4893 server.go:79] "Starting device plugin registration server"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.970740 4893 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.970763 4893 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.971245 4893 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.971335 4893 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.971344 4893 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 28 15:01:22 crc kubenswrapper[4893]: E0128 15:01:22.979049 4893 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.990848 4893 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.990973 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.992015 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.992058 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.992068 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.992239 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.993020 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.993047 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.993097 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.993136 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.993148 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.993300 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.993552 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.993584 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.993668 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.993689 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.993697 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.994142 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.994174 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.994187 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.994271 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.994285 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.994294 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.994401 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.994492 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.994518 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.995074 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.995097 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.995109 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.995121 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.995137 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.995155 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.995237 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.995356 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.995381 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.995850 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.995875 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.995877 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.995902 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.995889 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.995974 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.996067 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.996106 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.996666 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.996697 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:01:22 crc kubenswrapper[4893]: I0128 15:01:22.996708 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:01:23 crc kubenswrapper[4893]: E0128 15:01:23.032993 4893 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="400ms"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.068212 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.068274 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.068309 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.068338 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.068362 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.068588 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.068693 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.068742 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.068772 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.068836 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.068952 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.069042 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.069073 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.069178 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.069243 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.070951 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.072430 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.072469 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.072499 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.072534 4893 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: E0128 15:01:23.073278 4893 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.9:6443: connect: connection refused" node="crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170106 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170158 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170183 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170201 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170218 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170235 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170249 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170272 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170274 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170318 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170343 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170357 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170359 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170378 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170372 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170383 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170374 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170290 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170541 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170620 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170630 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170690 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170723 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170662 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170757 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170775 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170790 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170809 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170764 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.170792 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.273810 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.275388 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.275434 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.275445 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.275470 4893 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: E0128 15:01:23.275901 4893 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.9:6443: connect: connection refused" node="crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.331990 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.342999 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.355050 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.380706 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: W0128 15:01:23.382251 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-1cf6f32c9b39f23040e51c90764b929466c2352ad2bb0691a257e5975420a87d WatchSource:0}: Error finding container 1cf6f32c9b39f23040e51c90764b929466c2352ad2bb0691a257e5975420a87d: Status 404 returned error can't find the container with id 1cf6f32c9b39f23040e51c90764b929466c2352ad2bb0691a257e5975420a87d
Jan 28 15:01:23 crc kubenswrapper[4893]: W0128 15:01:23.384074 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-86219cc69708b30793515619226b35dfa5d405ebaf9f844a291137493c8308af WatchSource:0}: Error finding container 86219cc69708b30793515619226b35dfa5d405ebaf9f844a291137493c8308af: Status 404 returned error can't find the container with id 86219cc69708b30793515619226b35dfa5d405ebaf9f844a291137493c8308af
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.387638 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: E0128 15:01:23.434756 4893 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="800ms"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.676432 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.677668 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.677696 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.677707 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.677731 4893 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: E0128 15:01:23.678025 4893 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.9:6443: connect: connection refused" node="crc"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.822075 4893 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.826290 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 22:00:14.498654123 +0000 UTC
Jan 28 15:01:23 crc kubenswrapper[4893]: W0128 15:01:23.884639 4893 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Jan 28 15:01:23 crc kubenswrapper[4893]: E0128 15:01:23.884706 4893 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.895374 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"86219cc69708b30793515619226b35dfa5d405ebaf9f844a291137493c8308af"}
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.896303 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1cf6f32c9b39f23040e51c90764b929466c2352ad2bb0691a257e5975420a87d"}
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.897660 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3cb8f8182e04ac83bc63a18661e1358afaf0e653ce8b6596e98c68a2b6f8d9c1"}
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.898628 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"30f5aec3cd15ebeca3a60eb96e74063e51d1da34c56872935d76fd7e95934713"}
Jan 28 15:01:23 crc kubenswrapper[4893]: I0128 15:01:23.899523 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d6093c86292fe1722c23926946ccf6e0d43cafb8d3ec5ad26474221da42ec622"}
Jan 28 15:01:23 crc kubenswrapper[4893]: W0128 15:01:23.984628 4893 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Jan 28 15:01:23 crc kubenswrapper[4893]: E0128 15:01:23.985012 4893 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Jan 28 15:01:24 crc kubenswrapper[4893]: W0128 15:01:24.064444 4893 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Jan 28 15:01:24 crc kubenswrapper[4893]: E0128 15:01:24.064552 4893 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Jan 28 15:01:24 crc
kubenswrapper[4893]: E0128 15:01:24.236672 4893 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="1.6s" Jan 28 15:01:24 crc kubenswrapper[4893]: W0128 15:01:24.325088 4893 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Jan 28 15:01:24 crc kubenswrapper[4893]: E0128 15:01:24.325208 4893 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.478699 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.479944 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.479971 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.479979 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.480002 4893 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 15:01:24 crc kubenswrapper[4893]: E0128 15:01:24.480446 4893 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.9:6443: connect: connection refused" node="crc" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.822159 4893 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.827318 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 09:33:35.379362642 +0000 UTC Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.828499 4893 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 28 15:01:24 crc kubenswrapper[4893]: E0128 15:01:24.829784 4893 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.906214 4893 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d" exitCode=0 Jan 28 15:01:24 crc 
kubenswrapper[4893]: I0128 15:01:24.906329 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d"} Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.906393 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.907534 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.907561 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.907573 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.908468 4893 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c4b6324f5deb306054f5d11767e02171b3c93963af9b99e8e12aca0fe8e5b1d2" exitCode=0 Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.908567 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c4b6324f5deb306054f5d11767e02171b3c93963af9b99e8e12aca0fe8e5b1d2"} Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.908601 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.909175 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.909810 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.909860 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.909880 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.909905 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.909927 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.909939 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.911285 4893 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="fb64bd87594c2eeafd35a5ef9af465828f0a815f129f0d2d5e5d70eb59a0123b" exitCode=0 Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.911348 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.911355 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"fb64bd87594c2eeafd35a5ef9af465828f0a815f129f0d2d5e5d70eb59a0123b"} Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.912199 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.912237 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.912248 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.913759 4893 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b" exitCode=0 Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.913825 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b"} Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.913957 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.915080 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.915166 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.915237 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.917519 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74"} Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.917584 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540"} Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.917624 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1"} Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.917644 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b"} Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.917588 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.918782 4893 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.918871 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:24 crc kubenswrapper[4893]: I0128 15:01:24.918935 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:25 crc kubenswrapper[4893]: I0128 15:01:25.823115 4893 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Jan 28 15:01:25 crc kubenswrapper[4893]: I0128 15:01:25.827637 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 11:49:39.430739543 +0000 UTC Jan 28 15:01:25 crc kubenswrapper[4893]: E0128 15:01:25.837453 4893 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="3.2s" Jan 28 15:01:25 crc kubenswrapper[4893]: W0128 15:01:25.919174 4893 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Jan 28 15:01:25 crc kubenswrapper[4893]: E0128 15:01:25.919272 4893 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:01:25 crc kubenswrapper[4893]: I0128 15:01:25.928297 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb"} Jan 28 15:01:25 crc kubenswrapper[4893]: I0128 15:01:25.928367 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568"} Jan 28 15:01:25 crc kubenswrapper[4893]: I0128 15:01:25.928385 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d"} Jan 28 15:01:25 crc kubenswrapper[4893]: I0128 15:01:25.928399 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6"} Jan 28 15:01:25 crc kubenswrapper[4893]: I0128 15:01:25.930560 4893 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="76438a49eeb93884dfb50594be34601b0a2f215d7cf6a4357b42dd76517bf599" exitCode=0 Jan 28 15:01:25 crc 
kubenswrapper[4893]: I0128 15:01:25.930641 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"76438a49eeb93884dfb50594be34601b0a2f215d7cf6a4357b42dd76517bf599"} Jan 28 15:01:25 crc kubenswrapper[4893]: I0128 15:01:25.930705 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:25 crc kubenswrapper[4893]: I0128 15:01:25.932105 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:25 crc kubenswrapper[4893]: I0128 15:01:25.932183 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:25 crc kubenswrapper[4893]: I0128 15:01:25.932196 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:25 crc kubenswrapper[4893]: I0128 15:01:25.933402 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"5af5caf464fa918b73aae723df4c986b4de947d1e9dca38c3363c88b0aeab84a"} Jan 28 15:01:25 crc kubenswrapper[4893]: I0128 15:01:25.933422 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:25 crc kubenswrapper[4893]: I0128 15:01:25.934532 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:25 crc kubenswrapper[4893]: I0128 15:01:25.934552 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:25 crc kubenswrapper[4893]: I0128 15:01:25.934562 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:25 crc kubenswrapper[4893]: I0128 15:01:25.936715 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ef7d43d04a0e8db178dabe494cea96d5bce395b0323225f7aefeab20beb8376d"} Jan 28 15:01:25 crc kubenswrapper[4893]: I0128 15:01:25.936755 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a64db1e71a93568b0df4fd7ba25703120bf42dad2dac4a67a4c59481fea8e45c"} Jan 28 15:01:25 crc kubenswrapper[4893]: I0128 15:01:25.936767 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d8aebd6e0a3c524d6d950232c7b12bb477ac32d9334863f94e6b3fa094689076"} Jan 28 15:01:25 crc kubenswrapper[4893]: I0128 15:01:25.936778 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:25 crc kubenswrapper[4893]: I0128 15:01:25.936791 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:25 crc kubenswrapper[4893]: I0128 15:01:25.938847 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:25 crc kubenswrapper[4893]: I0128 15:01:25.938894 4893 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:25 crc kubenswrapper[4893]: I0128 15:01:25.938914 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:25 crc kubenswrapper[4893]: I0128 15:01:25.939003 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:25 crc kubenswrapper[4893]: I0128 15:01:25.939047 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:25 crc kubenswrapper[4893]: I0128 15:01:25.939059 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:26 crc kubenswrapper[4893]: W0128 15:01:26.001632 4893 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Jan 28 15:01:26 crc kubenswrapper[4893]: E0128 15:01:26.001733 4893 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:01:26 crc kubenswrapper[4893]: I0128 15:01:26.081311 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:26 crc kubenswrapper[4893]: I0128 15:01:26.082678 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:26 crc kubenswrapper[4893]: I0128 15:01:26.082776 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:26 crc kubenswrapper[4893]: I0128 15:01:26.082786 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:26 crc kubenswrapper[4893]: I0128 15:01:26.082834 4893 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 15:01:26 crc kubenswrapper[4893]: E0128 15:01:26.083461 4893 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.9:6443: connect: connection refused" node="crc" Jan 28 15:01:26 crc kubenswrapper[4893]: W0128 15:01:26.551117 4893 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Jan 28 15:01:26 crc kubenswrapper[4893]: E0128 15:01:26.551197 4893 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:01:26 crc kubenswrapper[4893]: I0128 15:01:26.828245 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 18:17:41.092199064 +0000 UTC Jan 28 15:01:26 crc 
kubenswrapper[4893]: I0128 15:01:26.943569 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fd6cfaa7c19bafc9f2187d3594841df295db09b64e3ae8ceb519950f3f8aab6b"} Jan 28 15:01:26 crc kubenswrapper[4893]: I0128 15:01:26.943675 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:26 crc kubenswrapper[4893]: I0128 15:01:26.944938 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:26 crc kubenswrapper[4893]: I0128 15:01:26.944996 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:26 crc kubenswrapper[4893]: I0128 15:01:26.945019 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:26 crc kubenswrapper[4893]: I0128 15:01:26.946867 4893 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ed28ff4d8747036a3cbd04976c94b09a39e6de26a1ec019ac1c117e11144d275" exitCode=0 Jan 28 15:01:26 crc kubenswrapper[4893]: I0128 15:01:26.946984 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:26 crc kubenswrapper[4893]: I0128 15:01:26.947042 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:26 crc kubenswrapper[4893]: I0128 15:01:26.947755 4893 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 15:01:26 crc kubenswrapper[4893]: I0128 15:01:26.947755 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ed28ff4d8747036a3cbd04976c94b09a39e6de26a1ec019ac1c117e11144d275"} Jan 28 15:01:26 crc kubenswrapper[4893]: I0128 15:01:26.947802 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:26 crc kubenswrapper[4893]: I0128 15:01:26.950613 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:26 crc kubenswrapper[4893]: I0128 15:01:26.950661 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:26 crc kubenswrapper[4893]: I0128 15:01:26.950674 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:26 crc kubenswrapper[4893]: I0128 15:01:26.951275 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:26 crc kubenswrapper[4893]: I0128 15:01:26.951381 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:26 crc kubenswrapper[4893]: I0128 15:01:26.951404 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:26 crc kubenswrapper[4893]: I0128 15:01:26.951789 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:26 crc kubenswrapper[4893]: I0128 15:01:26.951820 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:26 
crc kubenswrapper[4893]: I0128 15:01:26.951838 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:27 crc kubenswrapper[4893]: I0128 15:01:27.829388 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 12:34:06.882319033 +0000 UTC Jan 28 15:01:27 crc kubenswrapper[4893]: I0128 15:01:27.956835 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1c5b098072ae8786aec1e9b634d91d42b268bcf0d4088469bf67624ca6303e65"} Jan 28 15:01:27 crc kubenswrapper[4893]: I0128 15:01:27.956894 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a405256fbb54ea23bc63301a6ec47d4c95b136f4539ec75e6b3dc63d3b816885"} Jan 28 15:01:27 crc kubenswrapper[4893]: I0128 15:01:27.956909 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"12a1a9d9b9431411c444116a3767697389bb07a7c7a6029f5f2da7845820e01a"} Jan 28 15:01:27 crc kubenswrapper[4893]: I0128 15:01:27.956912 4893 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 15:01:27 crc kubenswrapper[4893]: I0128 15:01:27.956972 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:27 crc kubenswrapper[4893]: I0128 15:01:27.957015 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:27 crc kubenswrapper[4893]: I0128 15:01:27.956920 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"21285509dbb2f59833f11569ed63e61412060a1abff27f0650603553139d4b9e"} Jan 28 15:01:27 crc kubenswrapper[4893]: I0128 15:01:27.957334 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5193a27a1241562d752724880c56d180111ec04ea86bfe3c319e885dfd263273"} Jan 28 15:01:27 crc kubenswrapper[4893]: I0128 15:01:27.958354 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:27 crc kubenswrapper[4893]: I0128 15:01:27.958404 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:27 crc kubenswrapper[4893]: I0128 15:01:27.958422 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:27 crc kubenswrapper[4893]: I0128 15:01:27.958653 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:27 crc kubenswrapper[4893]: I0128 15:01:27.958698 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:27 crc kubenswrapper[4893]: I0128 15:01:27.958713 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:28 crc kubenswrapper[4893]: I0128 15:01:28.119285 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:01:28 crc kubenswrapper[4893]: I0128 15:01:28.601139 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 28 15:01:28 crc kubenswrapper[4893]: I0128 15:01:28.830446 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 18:29:50.844213024 +0000 UTC Jan 28 15:01:28 crc kubenswrapper[4893]: I0128 15:01:28.960412 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:28 crc kubenswrapper[4893]: I0128 15:01:28.960434 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:28 crc kubenswrapper[4893]: I0128 15:01:28.961918 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:28 crc kubenswrapper[4893]: I0128 15:01:28.961969 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:28 crc kubenswrapper[4893]: I0128 15:01:28.961988 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:28 crc kubenswrapper[4893]: I0128 15:01:28.962063 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:28 crc kubenswrapper[4893]: I0128 15:01:28.962099 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:28 crc kubenswrapper[4893]: I0128 15:01:28.962116 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:28 crc kubenswrapper[4893]: I0128 15:01:28.976245 4893 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 28 15:01:29 crc kubenswrapper[4893]: I0128 15:01:29.284649 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:29 crc kubenswrapper[4893]: I0128 15:01:29.286255 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:29 crc kubenswrapper[4893]: I0128 15:01:29.286304 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:29 crc kubenswrapper[4893]: I0128 15:01:29.286318 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:29 crc kubenswrapper[4893]: I0128 15:01:29.286350 4893 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 15:01:29 crc kubenswrapper[4893]: I0128 15:01:29.395966 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 28 15:01:29 crc kubenswrapper[4893]: I0128 15:01:29.567948 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:01:29 crc kubenswrapper[4893]: I0128 15:01:29.568255 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:29 crc kubenswrapper[4893]: I0128 15:01:29.570028 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:29 crc kubenswrapper[4893]: I0128 
15:01:29.570104 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:29 crc kubenswrapper[4893]: I0128 15:01:29.570140 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:29 crc kubenswrapper[4893]: I0128 15:01:29.831603 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 05:10:57.710310523 +0000 UTC Jan 28 15:01:29 crc kubenswrapper[4893]: I0128 15:01:29.963267 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:29 crc kubenswrapper[4893]: I0128 15:01:29.964603 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:29 crc kubenswrapper[4893]: I0128 15:01:29.964690 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:29 crc kubenswrapper[4893]: I0128 15:01:29.964728 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:30 crc kubenswrapper[4893]: I0128 15:01:30.205299 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:01:30 crc kubenswrapper[4893]: I0128 15:01:30.205743 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:30 crc kubenswrapper[4893]: I0128 15:01:30.207693 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:30 crc kubenswrapper[4893]: I0128 15:01:30.207759 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:30 crc kubenswrapper[4893]: I0128 15:01:30.207792 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:30 crc kubenswrapper[4893]: I0128 15:01:30.652576 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:01:30 crc kubenswrapper[4893]: I0128 15:01:30.652881 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:30 crc kubenswrapper[4893]: I0128 15:01:30.657247 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:30 crc kubenswrapper[4893]: I0128 15:01:30.657300 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:30 crc kubenswrapper[4893]: I0128 15:01:30.657313 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:30 crc kubenswrapper[4893]: I0128 15:01:30.832748 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 22:11:18.845191258 +0000 UTC Jan 28 15:01:30 crc kubenswrapper[4893]: I0128 15:01:30.885984 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:01:30 crc kubenswrapper[4893]: I0128 15:01:30.966001 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Jan 28 15:01:30 crc kubenswrapper[4893]: I0128 15:01:30.966093 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:30 crc kubenswrapper[4893]: I0128 15:01:30.966832 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:30 crc kubenswrapper[4893]: I0128 15:01:30.966867 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:30 crc kubenswrapper[4893]: I0128 15:01:30.966877 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:30 crc kubenswrapper[4893]: I0128 15:01:30.967326 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:30 crc kubenswrapper[4893]: I0128 15:01:30.967369 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:30 crc kubenswrapper[4893]: I0128 15:01:30.967382 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:31 crc kubenswrapper[4893]: I0128 15:01:31.519526 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:01:31 crc kubenswrapper[4893]: I0128 15:01:31.519808 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:31 crc kubenswrapper[4893]: I0128 15:01:31.521535 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:31 crc kubenswrapper[4893]: I0128 15:01:31.521617 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:31 crc kubenswrapper[4893]: I0128 15:01:31.521646 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:31 crc kubenswrapper[4893]: I0128 15:01:31.833740 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 09:40:02.843952897 +0000 UTC Jan 28 15:01:32 crc kubenswrapper[4893]: I0128 15:01:32.834234 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 13:58:52.054983336 +0000 UTC Jan 28 15:01:32 crc kubenswrapper[4893]: I0128 15:01:32.951012 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:01:32 crc kubenswrapper[4893]: I0128 15:01:32.951320 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:32 crc kubenswrapper[4893]: I0128 15:01:32.953761 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:32 crc kubenswrapper[4893]: I0128 15:01:32.953828 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:32 crc kubenswrapper[4893]: I0128 15:01:32.953845 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:32 crc kubenswrapper[4893]: I0128 15:01:32.957094 4893 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:01:32 crc kubenswrapper[4893]: I0128 15:01:32.971732 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:32 crc kubenswrapper[4893]: I0128 15:01:32.973216 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:32 crc kubenswrapper[4893]: I0128 15:01:32.973246 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:32 crc kubenswrapper[4893]: I0128 15:01:32.973257 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:32 crc kubenswrapper[4893]: E0128 15:01:32.979303 4893 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 28 15:01:33 crc kubenswrapper[4893]: I0128 15:01:33.834587 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 06:06:04.57221354 +0000 UTC Jan 28 15:01:34 crc kubenswrapper[4893]: I0128 15:01:34.835364 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 13:56:44.914594591 +0000 UTC Jan 28 15:01:34 crc kubenswrapper[4893]: I0128 15:01:34.917232 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:01:34 crc kubenswrapper[4893]: I0128 15:01:34.917434 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:34 crc kubenswrapper[4893]: I0128 15:01:34.919049 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:34 crc kubenswrapper[4893]: I0128 15:01:34.919096 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:34 crc kubenswrapper[4893]: I0128 15:01:34.919110 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:34 crc kubenswrapper[4893]: I0128 15:01:34.924785 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:01:34 crc kubenswrapper[4893]: I0128 15:01:34.978117 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:34 crc kubenswrapper[4893]: I0128 15:01:34.979774 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:34 crc kubenswrapper[4893]: I0128 15:01:34.979853 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:34 crc kubenswrapper[4893]: I0128 15:01:34.979869 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:35 crc kubenswrapper[4893]: I0128 15:01:35.835801 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 01:08:21.527947343 +0000 UTC Jan 28 15:01:36 crc 
kubenswrapper[4893]: I0128 15:01:36.798037 4893 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 28 15:01:36 crc kubenswrapper[4893]: I0128 15:01:36.798096 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 28 15:01:36 crc kubenswrapper[4893]: I0128 15:01:36.823070 4893 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 28 15:01:36 crc kubenswrapper[4893]: I0128 15:01:36.836293 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 22:25:52.988818628 +0000 UTC Jan 28 15:01:36 crc kubenswrapper[4893]: I0128 15:01:36.986062 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 15:01:36 crc kubenswrapper[4893]: I0128 15:01:36.987503 4893 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fd6cfaa7c19bafc9f2187d3594841df295db09b64e3ae8ceb519950f3f8aab6b" exitCode=255 Jan 28 15:01:36 crc kubenswrapper[4893]: I0128 15:01:36.987572 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"fd6cfaa7c19bafc9f2187d3594841df295db09b64e3ae8ceb519950f3f8aab6b"} Jan 28 15:01:36 crc kubenswrapper[4893]: I0128 15:01:36.987760 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:01:36 crc kubenswrapper[4893]: I0128 15:01:36.988551 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:36 crc kubenswrapper[4893]: I0128 15:01:36.988603 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:36 crc kubenswrapper[4893]: I0128 15:01:36.988619 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:36 crc kubenswrapper[4893]: I0128 15:01:36.989375 4893 scope.go:117] "RemoveContainer" containerID="fd6cfaa7c19bafc9f2187d3594841df295db09b64e3ae8ceb519950f3f8aab6b" Jan 28 15:01:37 crc kubenswrapper[4893]: W0128 15:01:37.126234 4893 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 28 15:01:37 crc kubenswrapper[4893]: I0128 15:01:37.126352 4893 trace.go:236] Trace[1640414533]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 15:01:27.125) (total time: 10001ms): Jan 28 15:01:37 crc kubenswrapper[4893]: Trace[1640414533]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (15:01:37.126) Jan 28 15:01:37 crc kubenswrapper[4893]: Trace[1640414533]: [10.001099439s] [10.001099439s] END Jan 28 15:01:37 crc kubenswrapper[4893]: E0128 15:01:37.126380 4893 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 28 15:01:37 crc kubenswrapper[4893]: I0128 15:01:37.192176 4893 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 28 15:01:37 crc kubenswrapper[4893]: I0128 15:01:37.192272 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 28 15:01:37 crc kubenswrapper[4893]: I0128 15:01:37.197935 4893 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 28 15:01:37 crc kubenswrapper[4893]: I0128 15:01:37.198026 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 28 15:01:37 crc kubenswrapper[4893]: I0128 15:01:37.837363 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 03:35:37.543903976 +0000 UTC Jan 28 15:01:37 crc kubenswrapper[4893]: I0128 15:01:37.917840 4893 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 15:01:37 crc kubenswrapper[4893]: I0128 15:01:37.917924 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 15:01:37 crc kubenswrapper[4893]: I0128 15:01:37.991306 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 15:01:37 
crc kubenswrapper[4893]: I0128 15:01:37.993050 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c"}
Jan 28 15:01:37 crc kubenswrapper[4893]: I0128 15:01:37.993263 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:01:37 crc kubenswrapper[4893]: I0128 15:01:37.994275 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:01:37 crc kubenswrapper[4893]: I0128 15:01:37.994317 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:01:37 crc kubenswrapper[4893]: I0128 15:01:37.994332 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:01:38 crc kubenswrapper[4893]: I0128 15:01:38.120154 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:01:38 crc kubenswrapper[4893]: I0128 15:01:38.837818 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 02:35:08.007715131 +0000 UTC
Jan 28 15:01:38 crc kubenswrapper[4893]: I0128 15:01:38.995503 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:01:38 crc kubenswrapper[4893]: I0128 15:01:38.996598 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:01:38 crc kubenswrapper[4893]: I0128 15:01:38.996644 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:01:38 crc kubenswrapper[4893]: I0128 15:01:38.996655 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:01:39 crc kubenswrapper[4893]: I0128 15:01:39.436441 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 28 15:01:39 crc kubenswrapper[4893]: I0128 15:01:39.436732 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:01:39 crc kubenswrapper[4893]: I0128 15:01:39.438154 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:01:39 crc kubenswrapper[4893]: I0128 15:01:39.438186 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:01:39 crc kubenswrapper[4893]: I0128 15:01:39.438201 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:01:39 crc kubenswrapper[4893]: I0128 15:01:39.450853 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 28 15:01:39 crc kubenswrapper[4893]: I0128 15:01:39.838680 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 19:57:54.966513828 +0000 UTC
Jan 28 15:01:39 crc kubenswrapper[4893]: I0128 15:01:39.997942 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:01:39 crc kubenswrapper[4893]: I0128 15:01:39.998840 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:01:39 crc kubenswrapper[4893]: I0128 15:01:39.998884 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:01:39 crc kubenswrapper[4893]: I0128 15:01:39.998896 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:01:40 crc kubenswrapper[4893]: I0128 15:01:40.659361 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:01:40 crc kubenswrapper[4893]: I0128 15:01:40.659577 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:01:40 crc kubenswrapper[4893]: I0128 15:01:40.660724 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:01:40 crc kubenswrapper[4893]: I0128 15:01:40.660767 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:01:40 crc kubenswrapper[4893]: I0128 15:01:40.660781 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:01:40 crc kubenswrapper[4893]: I0128 15:01:40.665015 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:01:40 crc kubenswrapper[4893]: I0128 15:01:40.839099 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 13:31:15.064053707 +0000 UTC
Jan 28 15:01:41 crc kubenswrapper[4893]: I0128 15:01:41.000933 4893 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:01:41 crc kubenswrapper[4893]: I0128 15:01:41.002038 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:01:41 crc kubenswrapper[4893]: I0128 15:01:41.002078 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:01:41 crc kubenswrapper[4893]: I0128 15:01:41.002093 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:01:41 crc kubenswrapper[4893]: I0128 15:01:41.839952 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 05:07:38.426622066 +0000 UTC
Jan 28 15:01:42 crc kubenswrapper[4893]: E0128 15:01:42.190249 4893 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.195195 4893 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.200884 4893 trace.go:236] Trace[1527139125]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 15:01:31.372) (total time: 10828ms):
Jan 28 15:01:42 crc kubenswrapper[4893]: Trace[1527139125]: ---"Objects listed" error: 10823ms (15:01:42.195)
Jan 28 15:01:42 crc kubenswrapper[4893]: Trace[1527139125]: [10.828359411s] [10.828359411s] END
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.200932 4893 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.201673 4893 trace.go:236] Trace[2128157053]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 15:01:30.081) (total time: 12120ms):
Jan 28 15:01:42 crc kubenswrapper[4893]: Trace[2128157053]: ---"Objects listed" error: 12120ms (15:01:42.201)
Jan 28 15:01:42 crc kubenswrapper[4893]: Trace[2128157053]: [12.12022211s] [12.12022211s] END
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.201696 4893 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.203898 4893 trace.go:236] Trace[1423116372]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 15:01:32.200) (total time: 10003ms):
Jan 28 15:01:42 crc kubenswrapper[4893]: Trace[1423116372]: ---"Objects listed" error: 10003ms (15:01:42.203)
Jan 28 15:01:42 crc kubenswrapper[4893]: Trace[1423116372]: [10.003631849s] [10.003631849s] END
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.203918 4893 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.209867 4893 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.216896 4893 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.217316 4893 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.218655 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.218695 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.218708 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.218732 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.218747 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:42Z","lastTransitionTime":"2026-01-28T15:01:42Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Jan 28 15:01:42 crc kubenswrapper[4893]: E0128 15:01:42.232709 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.233789 4893 csr.go:261] certificate signing request csr-kdccs is approved, waiting to be issued Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.241983 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.242035 4893 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.242049 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.242074 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.242089 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:42Z","lastTransitionTime":"2026-01-28T15:01:42Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"}
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.242727 4893 csr.go:257] certificate signing request csr-kdccs is issued
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.255500 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.255546 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.255558 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.255579 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.255588 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:42Z","lastTransitionTime":"2026-01-28T15:01:42Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"}
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.275797 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.275868 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.275883 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.275909 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.275923 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:42Z","lastTransitionTime":"2026-01-28T15:01:42Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"}
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.294851 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.294886 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 
15:01:42.294898 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.294924 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.294935 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:42Z","lastTransitionTime":"2026-01-28T15:01:42Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Jan 28 15:01:42 crc kubenswrapper[4893]: E0128 15:01:42.307624 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:42 crc kubenswrapper[4893]: E0128 15:01:42.307801 4893 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.309997 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 
15:01:42.310036 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.310055 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.310081 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.310096 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:42Z","lastTransitionTime":"2026-01-28T15:01:42Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.370019 4893 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.412445 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.412525 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.412539 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.412564 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.412585 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:42Z","lastTransitionTime":"2026-01-28T15:01:42Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.514491 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.514525 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.514535 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.514555 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.514565 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:42Z","lastTransitionTime":"2026-01-28T15:01:42Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.617014 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.617055 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.617064 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.617081 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.617090 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:42Z","lastTransitionTime":"2026-01-28T15:01:42Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.691820 4893 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 28 15:01:42 crc kubenswrapper[4893]: W0128 15:01:42.692307 4893 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:01:42 crc kubenswrapper[4893]: W0128 15:01:42.692340 4893 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:01:42 crc kubenswrapper[4893]: W0128 15:01:42.692329 4893 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:01:42 crc kubenswrapper[4893]: W0128 15:01:42.692403 4893 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.719239 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.719287 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.719298 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.719321 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.719335 4893 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:42Z","lastTransitionTime":"2026-01-28T15:01:42Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.819938 4893 apiserver.go:52] "Watching apiserver" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.821435 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.821535 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.821552 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.821573 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.821585 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:42Z","lastTransitionTime":"2026-01-28T15:01:42Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.826810 4893 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.826992 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-hn5qq","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.827283 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.827679 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.827695 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.827721 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.827738 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.827811 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:01:42 crc kubenswrapper[4893]: E0128 15:01:42.827902 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:01:42 crc kubenswrapper[4893]: E0128 15:01:42.827962 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:01:42 crc kubenswrapper[4893]: E0128 15:01:42.828094 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.828170 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-hn5qq" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.829572 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.830317 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.830326 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.830554 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.830616 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.831127 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.831226 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.832912 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.833318 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.833457 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.834133 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.835348 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.840138 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 00:08:50.219522574 +0000 UTC Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.854203 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.870142 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.882789 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.895152 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.898881 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.898940 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.898965 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.898990 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.899013 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.899036 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.899065 4893 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5fpz\" (UniqueName: \"kubernetes.io/projected/001ac9ae-35b3-4f82-abaf-1eb6088441e2-kube-api-access-d5fpz\") pod \"node-resolver-hn5qq\" (UID: \"001ac9ae-35b3-4f82-abaf-1eb6088441e2\") " pod="openshift-dns/node-resolver-hn5qq" Jan 28 15:01:42 crc kubenswrapper[4893]: E0128 15:01:42.899081 4893 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.899094 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.899122 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:01:42 crc kubenswrapper[4893]: E0128 15:01:42.899176 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:01:43.399136053 +0000 UTC m=+21.172751081 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.899207 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.899254 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.899279 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/001ac9ae-35b3-4f82-abaf-1eb6088441e2-hosts-file\") pod \"node-resolver-hn5qq\" (UID: \"001ac9ae-35b3-4f82-abaf-1eb6088441e2\") " pod="openshift-dns/node-resolver-hn5qq" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.899300 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.899320 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.899339 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.899376 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:01:42 crc kubenswrapper[4893]: E0128 15:01:42.899516 4893 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:01:42 crc kubenswrapper[4893]: E0128 15:01:42.899559 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:01:43.399547994 +0000 UTC m=+21.173163022 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.900089 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.900284 4893 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.900563 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.901155 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.906682 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.908433 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:01:42 crc kubenswrapper[4893]: E0128 15:01:42.917758 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:01:42 crc kubenswrapper[4893]: E0128 15:01:42.917825 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:01:42 crc kubenswrapper[4893]: E0128 15:01:42.917841 4893 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.917842 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:42 crc kubenswrapper[4893]: E0128 15:01:42.917966 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:01:43.41793753 +0000 UTC m=+21.191552558 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:01:42 crc kubenswrapper[4893]: E0128 15:01:42.920510 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:01:42 crc kubenswrapper[4893]: E0128 15:01:42.920544 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:01:42 crc kubenswrapper[4893]: E0128 15:01:42.920557 4893 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:01:42 crc kubenswrapper[4893]: E0128 15:01:42.920657 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:01:43.420621022 +0000 UTC m=+21.194236050 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.921524 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.923775 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.923826 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.923839 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.923856 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.923877 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:42Z","lastTransitionTime":"2026-01-28T15:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.925551 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.926656 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.929057 4893 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.930754 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.942305 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.954246 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 
28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.968274 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.986624 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:42 crc kubenswrapper[4893]: I0128 15:01:42.998146 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000103 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000139 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000168 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000184 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000203 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000220 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000270 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000291 4893 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000310 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000328 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000348 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000364 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000381 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000399 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000416 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000432 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000462 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" 
(UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000497 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000529 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000547 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000562 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000579 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000597 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000613 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000628 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000700 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000719 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000737 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000753 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000779 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000799 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000814 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000830 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000845 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000863 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000879 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000894 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000913 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000931 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000951 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000974 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.000993 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.001122 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.001142 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.001178 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.001194 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.001209 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.001223 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.001238 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.001277 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.001292 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.001307 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.001321 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.001347 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.001362 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.001378 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.001406 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.001421 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.001689 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.001708 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.001737 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.001754 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.001770 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.001830 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.001898 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.001915 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.001953 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.001987 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.002008 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.002024 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.002038 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.002088 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.002106 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.002121 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.002239 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.002267 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.002289 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.002307 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.002328 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.002350 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.002411 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.002459 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.002504 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.002539 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.002570 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.002602 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.002624 4893 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.002661 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.002708 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.002757 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.002906 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.002949 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.002994 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003020 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003046 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003062 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:01:43 crc 
kubenswrapper[4893]: I0128 15:01:43.003079 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003104 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003120 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003136 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003162 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003177 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003197 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003224 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003238 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003263 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003279 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003297 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003312 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003327 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003343 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003358 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003372 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003387 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003404 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003432 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003448 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003465 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003492 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003507 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003523 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003539 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003555 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003571 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003587 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:01:43 crc 
kubenswrapper[4893]: I0128 15:01:43.003603 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003617 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003636 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003650 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003668 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003684 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003702 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003719 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003735 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003750 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 
15:01:43.003766 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003781 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003798 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003818 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003834 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003862 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003879 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003896 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003912 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003927 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 15:01:43 crc kubenswrapper[4893]: 
I0128 15:01:43.003942 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003958 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003976 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.003995 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004013 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004031 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004047 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004062 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004079 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004095 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 
15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004112 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004129 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004145 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004161 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004177 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004193 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004211 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004228 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004247 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004277 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004302 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004318 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004334 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004424 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004441 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004459 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004489 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004506 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004527 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004545 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: 
\"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004561 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004577 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004595 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004615 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004632 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004675 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004693 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004709 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004725 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004741 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004758 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004775 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004791 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004807 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004824 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004840 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004861 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004877 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.004893 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.005297 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.005636 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.006013 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.006099 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.006122 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5fpz\" (UniqueName: \"kubernetes.io/projected/001ac9ae-35b3-4f82-abaf-1eb6088441e2-kube-api-access-d5fpz\") pod \"node-resolver-hn5qq\" (UID: \"001ac9ae-35b3-4f82-abaf-1eb6088441e2\") " pod="openshift-dns/node-resolver-hn5qq" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.006141 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.006198 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/001ac9ae-35b3-4f82-abaf-1eb6088441e2-hosts-file\") pod \"node-resolver-hn5qq\" (UID: \"001ac9ae-35b3-4f82-abaf-1eb6088441e2\") " pod="openshift-dns/node-resolver-hn5qq" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.006319 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.006642 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.007365 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.008111 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.008193 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.008552 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.008614 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.008675 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.008793 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.009155 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.009194 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.009018 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.009257 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.009273 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.009546 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.009566 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.009934 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.010178 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.010222 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.010307 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.010456 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.010694 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.010712 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.010905 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.010926 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.010928 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.011112 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.011246 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.011466 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.011548 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.012022 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.011856 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.012072 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.012336 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.012418 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.012430 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.012702 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.012880 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.013036 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.013279 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.013365 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.013583 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.014141 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.014194 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.013860 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.014714 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.015265 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.015436 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.015696 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.015813 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.015979 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.016251 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.017120 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.016968 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.017711 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.017738 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.017869 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.018078 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.018101 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.018156 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.018163 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.016659 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.018385 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.018406 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.018422 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.018509 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.018565 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.018609 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.018906 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.019209 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.019266 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.019693 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.019997 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.020177 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.020196 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.020400 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.020570 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.021125 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.021511 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.021663 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.021967 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.022105 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.022341 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.022400 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.022657 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.022741 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.023152 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.023457 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.019433 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.020228 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.021493 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.021911 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.022162 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.023855 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.024139 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.024423 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.024601 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.024742 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.024757 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.024962 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.025049 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.025121 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.025128 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.025176 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.025396 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.025535 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.025602 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.025860 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.025892 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: E0128 15:01:43.026034 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:01:43.52600338 +0000 UTC m=+21.299618408 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.026201 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.026537 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.026771 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.026934 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.027048 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). 
InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.027249 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.027286 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.027360 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.027333 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.027740 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.027790 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.027848 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/001ac9ae-35b3-4f82-abaf-1eb6088441e2-hosts-file\") pod \"node-resolver-hn5qq\" (UID: \"001ac9ae-35b3-4f82-abaf-1eb6088441e2\") " pod="openshift-dns/node-resolver-hn5qq" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.027928 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.028093 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.028299 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.028336 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.028356 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.028382 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.028708 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.028734 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.029064 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.029438 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.029805 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.029988 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.030302 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.030362 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.030363 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.030828 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.030924 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.030995 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.031034 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.031284 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.031250 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.031297 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.031705 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.031812 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.031936 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.031968 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.032165 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.032578 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.032618 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.032621 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.032745 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.032778 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.032676 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.032923 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.032862 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.033159 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.033666 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.033777 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.033694 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.034369 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.034613 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.035024 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.035445 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.035949 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.036078 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.035970 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.036919 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.037467 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.037682 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.037690 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.037735 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.037647 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.037811 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.038995 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.040941 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.043150 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.043373 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.043396 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.043544 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.046087 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.046133 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.046148 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.046170 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.046183 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:43Z","lastTransitionTime":"2026-01-28T15:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.048297 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5fpz\" (UniqueName: \"kubernetes.io/projected/001ac9ae-35b3-4f82-abaf-1eb6088441e2-kube-api-access-d5fpz\") pod \"node-resolver-hn5qq\" (UID: \"001ac9ae-35b3-4f82-abaf-1eb6088441e2\") " pod="openshift-dns/node-resolver-hn5qq" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.049026 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.051842 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.051847 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.052416 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.052427 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.052545 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.052668 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.052773 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.066973 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.067708 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.068330 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.071956 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.072120 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.073860 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.075324 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.077782 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.079718 4893 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c" exitCode=255 Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.079766 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c"} Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.079831 4893 scope.go:117] "RemoveContainer" containerID="fd6cfaa7c19bafc9f2187d3594841df295db09b64e3ae8ceb519950f3f8aab6b" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.092349 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.101392 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.101792 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.110341 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111012 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111027 4893 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111041 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111051 4893 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111061 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111070 4893 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111082 4893 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111090 4893 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111099 4893 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111107 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath 
\"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111118 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111128 4893 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111137 4893 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111146 4893 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111154 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111163 4893 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111172 4893 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111180 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111189 4893 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111198 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111207 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111216 4893 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111224 4893 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111245 4893 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111255 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111263 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111273 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111281 4893 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111290 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111298 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111306 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111314 4893 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111327 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111348 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111356 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111364 4893 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111372 4893 reconciler_common.go:293] 
"Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111379 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111388 4893 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111396 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111405 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111414 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111423 4893 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111432 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111442 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111451 4893 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111459 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111487 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111497 4893 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111506 4893 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111514 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111523 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111532 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111542 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111554 4893 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111563 4893 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111571 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111581 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111590 4893 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111598 4893 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111606 4893 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111616 4893 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111624 
4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111633 4893 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111641 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111650 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111658 4893 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111668 4893 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111676 4893 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111686 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111694 4893 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111703 4893 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111712 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111721 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111729 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111737 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111745 4893 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111756 4893 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111764 4893 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111772 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111780 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111789 4893 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111798 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111806 4893 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111814 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111824 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111832 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111840 4893 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111850 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111859 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111868 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111877 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111890 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111899 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111907 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111915 4893 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111924 4893 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111937 4893 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111946 4893 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111955 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111964 4893 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111972 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111981 4893 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.111990 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112000 4893 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112008 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112017 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112025 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112033 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112042 4893 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112051 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112059 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112069 4893 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112077 4893 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112087 4893 reconciler_common.go:293] "Volume detached for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112095 4893 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112104 4893 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112112 4893 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112121 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112130 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112138 4893 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112147 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112156 4893 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112165 4893 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112177 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112186 4893 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112194 4893 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112204 4893 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112213 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112221 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112230 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112239 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112248 4893 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112257 4893 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112265 4893 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112274 4893 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112284 4893 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112294 4893 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112302 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112314 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112323 4893 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112331 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112341 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112351 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112360 4893 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112368 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112376 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112385 4893 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112393 4893 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112403 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112412 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112420 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112429 4893 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 
15:01:43.112437 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112446 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112455 4893 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112466 4893 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112632 4893 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112641 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112651 4893 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112661 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112671 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112681 4893 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112690 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112700 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112709 4893 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112718 4893 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112728 4893 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112737 4893 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112746 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112756 4893 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112767 4893 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112777 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112786 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112795 4893 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112806 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112816 4893 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112825 4893 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112835 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112844 4893 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112853 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112862 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112871 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112880 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112889 4893 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112898 4893 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112906 4893 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112914 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112925 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112934 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112943 4893 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.112952 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.114219 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-l2nht"]
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.114562 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-h786s"]
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.114948 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-krkz9"]
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.115141 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.115464 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-l2nht"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.115740 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-h786s"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.117963 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.118106 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.118232 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.118321 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.119349 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.119519 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.119712 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.119829 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.119946 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.120052 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.120182 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.120376 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.120895 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.124152 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.129753 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.137818 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.145891 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.152280 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.156221 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.156272 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.156288 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.156312 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.156328 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:43Z","lastTransitionTime":"2026-01-28T15:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.158598 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.169584 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.169816 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.174913 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-hn5qq"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.186267 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.206162 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215164 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-multus-conf-dir\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215212 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b2ddd967-f9a8-464a-95de-512c9c5874fd-proxy-tls\") pod \"machine-config-daemon-l2nht\" (UID: \"b2ddd967-f9a8-464a-95de-512c9c5874fd\") " pod="openshift-machine-config-operator/machine-config-daemon-l2nht"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215236 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-host-var-lib-kubelet\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215255 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-system-cni-dir\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215286 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8mtt\" (UniqueName: \"kubernetes.io/projected/a51e5a50-969c-4f25-a895-ebb119642512-kube-api-access-p8mtt\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215309 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjrm9\" (UniqueName: \"kubernetes.io/projected/ac863e9c-63ed-4c56-8687-839ba5845dff-kube-api-access-pjrm9\") pod \"multus-additional-cni-plugins-h786s\" (UID: \"ac863e9c-63ed-4c56-8687-839ba5845dff\") " pod="openshift-multus/multus-additional-cni-plugins-h786s"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215345 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b2ddd967-f9a8-464a-95de-512c9c5874fd-rootfs\") pod \"machine-config-daemon-l2nht\" (UID: \"b2ddd967-f9a8-464a-95de-512c9c5874fd\") " pod="openshift-machine-config-operator/machine-config-daemon-l2nht"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215405 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b2ddd967-f9a8-464a-95de-512c9c5874fd-mcd-auth-proxy-config\") pod \"machine-config-daemon-l2nht\" (UID: \"b2ddd967-f9a8-464a-95de-512c9c5874fd\") " pod="openshift-machine-config-operator/machine-config-daemon-l2nht"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215425 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-cnibin\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215445 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-host-var-lib-cni-multus\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215484 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-etc-kubernetes\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215509 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ac863e9c-63ed-4c56-8687-839ba5845dff-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-h786s\" (UID: \"ac863e9c-63ed-4c56-8687-839ba5845dff\") " pod="openshift-multus/multus-additional-cni-plugins-h786s"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215552 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-host-var-lib-cni-bin\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215575 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-host-run-k8s-cni-cncf-io\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215594 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-hostroot\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215613 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a51e5a50-969c-4f25-a895-ebb119642512-multus-daemon-config\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215631 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-host-run-netns\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215648 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ac863e9c-63ed-4c56-8687-839ba5845dff-cni-binary-copy\") pod \"multus-additional-cni-plugins-h786s\" (UID: \"ac863e9c-63ed-4c56-8687-839ba5845dff\") " pod="openshift-multus/multus-additional-cni-plugins-h786s"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215674 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ac863e9c-63ed-4c56-8687-839ba5845dff-tuning-conf-dir\") pod \"multus-additional-cni-plugins-h786s\" (UID: \"ac863e9c-63ed-4c56-8687-839ba5845dff\") " pod="openshift-multus/multus-additional-cni-plugins-h786s"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215692 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ac863e9c-63ed-4c56-8687-839ba5845dff-system-cni-dir\") pod \"multus-additional-cni-plugins-h786s\" (UID: \"ac863e9c-63ed-4c56-8687-839ba5845dff\") " pod="openshift-multus/multus-additional-cni-plugins-h786s"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215709 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ac863e9c-63ed-4c56-8687-839ba5845dff-cnibin\") pod \"multus-additional-cni-plugins-h786s\" (UID: \"ac863e9c-63ed-4c56-8687-839ba5845dff\") " pod="openshift-multus/multus-additional-cni-plugins-h786s"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215725 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-os-release\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215740 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a51e5a50-969c-4f25-a895-ebb119642512-cni-binary-copy\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215756 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjvvl\" (UniqueName: \"kubernetes.io/projected/b2ddd967-f9a8-464a-95de-512c9c5874fd-kube-api-access-jjvvl\") pod \"machine-config-daemon-l2nht\" (UID: \"b2ddd967-f9a8-464a-95de-512c9c5874fd\") " pod="openshift-machine-config-operator/machine-config-daemon-l2nht"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215771 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ac863e9c-63ed-4c56-8687-839ba5845dff-os-release\") pod \"multus-additional-cni-plugins-h786s\" (UID: \"ac863e9c-63ed-4c56-8687-839ba5845dff\") " pod="openshift-multus/multus-additional-cni-plugins-h786s"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215797 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-multus-cni-dir\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215817 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-multus-socket-dir-parent\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215835 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-host-run-multus-certs\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215860 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.215871 4893 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.231684 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.243689 4893 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-28 14:56:42 +0000 UTC, rotation deadline is 2026-11-22 16:03:24.741499629 +0000 UTC
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.243930 4893 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7153h1m41.497573708s for next certificate rotation
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.244567 4893 scope.go:117] "RemoveContainer" containerID="fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c"
Jan 28 15:01:43 crc kubenswrapper[4893]: E0128 15:01:43.244812 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.245134 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.261959 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.268020 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.268144 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.268217 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.268280 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.268341 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:43Z","lastTransitionTime":"2026-01-28T15:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.279908 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.303626 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.319000 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-host-run-netns\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.319065 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ac863e9c-63ed-4c56-8687-839ba5845dff-cni-binary-copy\") pod \"multus-additional-cni-plugins-h786s\" (UID: \"ac863e9c-63ed-4c56-8687-839ba5845dff\") " pod="openshift-multus/multus-additional-cni-plugins-h786s"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.319112 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ac863e9c-63ed-4c56-8687-839ba5845dff-tuning-conf-dir\") pod \"multus-additional-cni-plugins-h786s\" (UID: \"ac863e9c-63ed-4c56-8687-839ba5845dff\") " pod="openshift-multus/multus-additional-cni-plugins-h786s"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.319185 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ac863e9c-63ed-4c56-8687-839ba5845dff-system-cni-dir\") pod \"multus-additional-cni-plugins-h786s\" (UID: \"ac863e9c-63ed-4c56-8687-839ba5845dff\") " pod="openshift-multus/multus-additional-cni-plugins-h786s"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.319238 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ac863e9c-63ed-4c56-8687-839ba5845dff-cnibin\") pod \"multus-additional-cni-plugins-h786s\" (UID: \"ac863e9c-63ed-4c56-8687-839ba5845dff\") " pod="openshift-multus/multus-additional-cni-plugins-h786s"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.319278 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjvvl\" (UniqueName: \"kubernetes.io/projected/b2ddd967-f9a8-464a-95de-512c9c5874fd-kube-api-access-jjvvl\") pod \"machine-config-daemon-l2nht\" (UID: \"b2ddd967-f9a8-464a-95de-512c9c5874fd\") " pod="openshift-machine-config-operator/machine-config-daemon-l2nht"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.319317 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ac863e9c-63ed-4c56-8687-839ba5845dff-os-release\") pod \"multus-additional-cni-plugins-h786s\" (UID: \"ac863e9c-63ed-4c56-8687-839ba5845dff\") " pod="openshift-multus/multus-additional-cni-plugins-h786s"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.319352 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-os-release\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.319387 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a51e5a50-969c-4f25-a895-ebb119642512-cni-binary-copy\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.319413 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-multus-cni-dir\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.319439 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-multus-socket-dir-parent\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.319466 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-host-run-multus-certs\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.319525 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b2ddd967-f9a8-464a-95de-512c9c5874fd-proxy-tls\") pod \"machine-config-daemon-l2nht\" (UID: \"b2ddd967-f9a8-464a-95de-512c9c5874fd\") " pod="openshift-machine-config-operator/machine-config-daemon-l2nht"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.319550 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-multus-conf-dir\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.319573 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-host-var-lib-kubelet\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.319594 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-system-cni-dir\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.319621 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8mtt\" (UniqueName: \"kubernetes.io/projected/a51e5a50-969c-4f25-a895-ebb119642512-kube-api-access-p8mtt\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.319639 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjrm9\" (UniqueName: \"kubernetes.io/projected/ac863e9c-63ed-4c56-8687-839ba5845dff-kube-api-access-pjrm9\") pod \"multus-additional-cni-plugins-h786s\" (UID: \"ac863e9c-63ed-4c56-8687-839ba5845dff\") " pod="openshift-multus/multus-additional-cni-plugins-h786s"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.319678 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b2ddd967-f9a8-464a-95de-512c9c5874fd-mcd-auth-proxy-config\") pod \"machine-config-daemon-l2nht\" (UID: \"b2ddd967-f9a8-464a-95de-512c9c5874fd\") " pod="openshift-machine-config-operator/machine-config-daemon-l2nht"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.319700 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b2ddd967-f9a8-464a-95de-512c9c5874fd-rootfs\") pod \"machine-config-daemon-l2nht\" (UID: \"b2ddd967-f9a8-464a-95de-512c9c5874fd\") " pod="openshift-machine-config-operator/machine-config-daemon-l2nht"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.319722 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-cnibin\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.319750 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-host-var-lib-cni-multus\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9"
Jan 28 15:01:43 crc
kubenswrapper[4893]: I0128 15:01:43.319783 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-etc-kubernetes\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.319947 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ac863e9c-63ed-4c56-8687-839ba5845dff-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-h786s\" (UID: \"ac863e9c-63ed-4c56-8687-839ba5845dff\") " pod="openshift-multus/multus-additional-cni-plugins-h786s" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.320010 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-host-var-lib-cni-bin\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.320058 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-host-run-k8s-cni-cncf-io\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.320093 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-hostroot\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.320115 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a51e5a50-969c-4f25-a895-ebb119642512-multus-daemon-config\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.321060 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a51e5a50-969c-4f25-a895-ebb119642512-multus-daemon-config\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.321146 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-host-run-netns\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.321796 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ac863e9c-63ed-4c56-8687-839ba5845dff-cni-binary-copy\") pod \"multus-additional-cni-plugins-h786s\" (UID: \"ac863e9c-63ed-4c56-8687-839ba5845dff\") " pod="openshift-multus/multus-additional-cni-plugins-h786s" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.322844 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/ac863e9c-63ed-4c56-8687-839ba5845dff-tuning-conf-dir\") pod \"multus-additional-cni-plugins-h786s\" (UID: \"ac863e9c-63ed-4c56-8687-839ba5845dff\") " pod="openshift-multus/multus-additional-cni-plugins-h786s" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.322920 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ac863e9c-63ed-4c56-8687-839ba5845dff-system-cni-dir\") pod \"multus-additional-cni-plugins-h786s\" (UID: \"ac863e9c-63ed-4c56-8687-839ba5845dff\") " pod="openshift-multus/multus-additional-cni-plugins-h786s" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.322971 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ac863e9c-63ed-4c56-8687-839ba5845dff-cnibin\") pod \"multus-additional-cni-plugins-h786s\" (UID: \"ac863e9c-63ed-4c56-8687-839ba5845dff\") " pod="openshift-multus/multus-additional-cni-plugins-h786s" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.323455 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ac863e9c-63ed-4c56-8687-839ba5845dff-os-release\") pod \"multus-additional-cni-plugins-h786s\" (UID: \"ac863e9c-63ed-4c56-8687-839ba5845dff\") " pod="openshift-multus/multus-additional-cni-plugins-h786s" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.323569 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-os-release\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.324740 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a51e5a50-969c-4f25-a895-ebb119642512-cni-binary-copy\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.325057 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-multus-cni-dir\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.325267 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-multus-socket-dir-parent\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.325340 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b2ddd967-f9a8-464a-95de-512c9c5874fd-rootfs\") pod \"machine-config-daemon-l2nht\" (UID: \"b2ddd967-f9a8-464a-95de-512c9c5874fd\") " pod="openshift-machine-config-operator/machine-config-daemon-l2nht" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.325383 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-cnibin\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " 
pod="openshift-multus/multus-krkz9" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.325421 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-host-var-lib-cni-multus\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.325463 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-etc-kubernetes\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.325593 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-host-var-lib-cni-bin\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.325857 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-hostroot\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.326748 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b2ddd967-f9a8-464a-95de-512c9c5874fd-mcd-auth-proxy-config\") pod \"machine-config-daemon-l2nht\" (UID: \"b2ddd967-f9a8-464a-95de-512c9c5874fd\") " pod="openshift-machine-config-operator/machine-config-daemon-l2nht" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.327037 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-host-run-k8s-cni-cncf-io\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.327142 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.327662 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ac863e9c-63ed-4c56-8687-839ba5845dff-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-h786s\" (UID: \"ac863e9c-63ed-4c56-8687-839ba5845dff\") " pod="openshift-multus/multus-additional-cni-plugins-h786s" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.328608 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-system-cni-dir\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.328669 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-host-var-lib-kubelet\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.329095 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-multus-conf-dir\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.329240 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a51e5a50-969c-4f25-a895-ebb119642512-host-run-multus-certs\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.331973 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/b2ddd967-f9a8-464a-95de-512c9c5874fd-proxy-tls\") pod \"machine-config-daemon-l2nht\" (UID: \"b2ddd967-f9a8-464a-95de-512c9c5874fd\") " pod="openshift-machine-config-operator/machine-config-daemon-l2nht" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.338221 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial 
tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.339917 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjvvl\" (UniqueName: \"kubernetes.io/projected/b2ddd967-f9a8-464a-95de-512c9c5874fd-kube-api-access-jjvvl\") pod \"machine-config-daemon-l2nht\" (UID: \"b2ddd967-f9a8-464a-95de-512c9c5874fd\") " pod="openshift-machine-config-operator/machine-config-daemon-l2nht" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.344401 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8mtt\" (UniqueName: \"kubernetes.io/projected/a51e5a50-969c-4f25-a895-ebb119642512-kube-api-access-p8mtt\") pod \"multus-krkz9\" (UID: \"a51e5a50-969c-4f25-a895-ebb119642512\") " pod="openshift-multus/multus-krkz9" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.348151 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjrm9\" (UniqueName: \"kubernetes.io/projected/ac863e9c-63ed-4c56-8687-839ba5845dff-kube-api-access-pjrm9\") pod \"multus-additional-cni-plugins-h786s\" (UID: \"ac863e9c-63ed-4c56-8687-839ba5845dff\") " pod="openshift-multus/multus-additional-cni-plugins-h786s" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.348352 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.359269 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.371037 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.371081 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.371094 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.371111 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.371122 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:43Z","lastTransitionTime":"2026-01-28T15:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.372538 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.386241 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.420812 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.420854 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:01:43 crc 
kubenswrapper[4893]: I0128 15:01:43.420878 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.420901 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:01:43 crc kubenswrapper[4893]: E0128 15:01:43.420976 4893 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:01:43 crc kubenswrapper[4893]: E0128 15:01:43.420995 4893 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:01:43 crc kubenswrapper[4893]: E0128 15:01:43.421011 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:01:43 crc kubenswrapper[4893]: E0128 15:01:43.421023 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:01:44.421009561 +0000 UTC m=+22.194624589 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:01:43 crc kubenswrapper[4893]: E0128 15:01:43.421014 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:01:43 crc kubenswrapper[4893]: E0128 15:01:43.421031 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:01:43 crc kubenswrapper[4893]: E0128 15:01:43.421048 4893 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:01:43 crc kubenswrapper[4893]: E0128 15:01:43.421049 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:01:43 crc kubenswrapper[4893]: E0128 15:01:43.421065 4893 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:01:43 crc kubenswrapper[4893]: E0128 15:01:43.421073 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:01:44.421052123 +0000 UTC m=+22.194667151 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:01:43 crc kubenswrapper[4893]: E0128 15:01:43.421089 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:01:44.421082903 +0000 UTC m=+22.194697931 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:01:43 crc kubenswrapper[4893]: E0128 15:01:43.421101 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:01:44.421095224 +0000 UTC m=+22.194710242 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.440229 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-krkz9" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.450593 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.460947 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-h786s" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.478715 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5q54w"] Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.480106 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.481932 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.482018 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.482098 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.482231 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.482300 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.483166 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.483781 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.485543 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.485593 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.485607 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.485626 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.485637 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:43Z","lastTransitionTime":"2026-01-28T15:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:43 crc kubenswrapper[4893]: W0128 15:01:43.489972 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac863e9c_63ed_4c56_8687_839ba5845dff.slice/crio-3492f1300256c3596d8eb28ad3412e9470fe0af74f4176eedace39a16247577b WatchSource:0}: Error finding container 3492f1300256c3596d8eb28ad3412e9470fe0af74f4176eedace39a16247577b: Status 404 returned error can't find the container with id 3492f1300256c3596d8eb28ad3412e9470fe0af74f4176eedace39a16247577b Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.494055 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.502752 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.514579 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.527328 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.539845 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.556791 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://fd6cfaa7c19bafc9f2187d3594841df295db09b64e3ae8ceb519950f3f8aab6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:36Z\\\",\\\"message\\\":\\\"W0128 15:01:26.138588 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 15:01:26.139199 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769612486 cert, and key in /tmp/serving-cert-1145524057/serving-signer.crt, /tmp/serving-cert-1145524057/serving-signer.key\\\\nI0128 15:01:26.422576 1 observer_polling.go:159] Starting file observer\\\\nW0128 15:01:26.426168 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:01:26.426614 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:26.428772 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1145524057/tls.crt::/tmp/serving-cert-1145524057/tls.key\\\\\\\"\\\\nF0128 15:01:36.741363 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.574299 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.595833 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.595891 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.595904 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.595922 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.595932 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:43Z","lastTransitionTime":"2026-01-28T15:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.599893 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.615921 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.622636 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.622805 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-var-lib-openvswitch\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.622834 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/135b9f51-26ac-44c4-a817-cbfa4b36ae54-ovnkube-script-lib\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.622854 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-run-openvswitch\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.622872 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-log-socket\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.622889 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwtf7\" (UniqueName: \"kubernetes.io/projected/135b9f51-26ac-44c4-a817-cbfa4b36ae54-kube-api-access-gwtf7\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.622907 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-cni-bin\") pod 
\"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.622937 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/135b9f51-26ac-44c4-a817-cbfa4b36ae54-ovnkube-config\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.622953 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/135b9f51-26ac-44c4-a817-cbfa4b36ae54-env-overrides\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.622971 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-etc-openvswitch\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.622991 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-systemd-units\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.623010 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-run-netns\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.623029 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.623049 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-slash\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.623066 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-run-ovn\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.623091 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-kubelet\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.623108 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-run-ovn-kubernetes\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.623323 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/135b9f51-26ac-44c4-a817-cbfa4b36ae54-ovn-node-metrics-cert\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.623349 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-run-systemd\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.623372 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-node-log\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.623410 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-cni-netd\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: E0128 15:01:43.623579 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:01:44.623558859 +0000 UTC m=+22.397173887 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.635205 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.649512 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 
28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.666195 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\"
:{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.699772 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.699826 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.699848 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.699871 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.699889 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:43Z","lastTransitionTime":"2026-01-28T15:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.723947 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-var-lib-openvswitch\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.723993 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/135b9f51-26ac-44c4-a817-cbfa4b36ae54-ovnkube-script-lib\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724015 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwtf7\" (UniqueName: \"kubernetes.io/projected/135b9f51-26ac-44c4-a817-cbfa4b36ae54-kube-api-access-gwtf7\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724033 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-run-openvswitch\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724054 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-log-socket\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724073 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-cni-bin\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724089 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/135b9f51-26ac-44c4-a817-cbfa4b36ae54-ovnkube-config\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724092 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-var-lib-openvswitch\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724105 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/135b9f51-26ac-44c4-a817-cbfa4b36ae54-env-overrides\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc 
kubenswrapper[4893]: I0128 15:01:43.724196 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-log-socket\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724263 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-etc-openvswitch\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724290 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-systemd-units\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724301 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-run-openvswitch\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724313 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-run-netns\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724334 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-cni-bin\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724338 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724375 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-slash\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724404 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-run-ovn\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724431 4893 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-kubelet\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724497 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-run-ovn-kubernetes\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724528 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/135b9f51-26ac-44c4-a817-cbfa4b36ae54-ovn-node-metrics-cert\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724558 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-run-systemd\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724580 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-cni-netd\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724614 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-node-log\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724723 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-node-log\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724759 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-etc-openvswitch\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724789 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-systemd-units\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724844 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-run-netns\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724875 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724907 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-slash\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724936 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-run-ovn\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724967 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-kubelet\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.724997 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-run-ovn-kubernetes\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.725218 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/135b9f51-26ac-44c4-a817-cbfa4b36ae54-env-overrides\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.725284 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-cni-netd\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.725367 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-run-systemd\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.725535 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/135b9f51-26ac-44c4-a817-cbfa4b36ae54-ovnkube-config\") pod \"ovnkube-node-5q54w\" (UID: 
\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.725836 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/135b9f51-26ac-44c4-a817-cbfa4b36ae54-ovnkube-script-lib\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.730781 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/135b9f51-26ac-44c4-a817-cbfa4b36ae54-ovn-node-metrics-cert\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.745397 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwtf7\" (UniqueName: \"kubernetes.io/projected/135b9f51-26ac-44c4-a817-cbfa4b36ae54-kube-api-access-gwtf7\") pod \"ovnkube-node-5q54w\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.803277 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.803357 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.803376 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.803408 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.803425 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:43Z","lastTransitionTime":"2026-01-28T15:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.811863 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.840554 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 00:14:30.084534678 +0000 UTC Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.906327 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.906383 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.906395 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.906415 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:43 crc kubenswrapper[4893]: I0128 15:01:43.906429 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:43Z","lastTransitionTime":"2026-01-28T15:01:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.009647 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.010130 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.010150 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.010180 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.010206 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:44Z","lastTransitionTime":"2026-01-28T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
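The repeated KubeletNotReady condition above is driven by the container runtime's network check: the CRI's network plugin looks for a CNI config file in the directory named in the message, and until one appears the node stays NotReady. The Go sketch below is a rough standalone approximation of that probe, not a copy of the real ocicni logic; only the directory path comes from the log:

package main

import (
    "fmt"
    "os"
    "strings"
)

func main() {
    const dir = "/etc/kubernetes/cni/net.d/" // directory from the log message
    entries, err := os.ReadDir(dir)
    if err != nil {
        fmt.Println("cannot read CNI conf dir:", err)
        return
    }
    for _, e := range entries {
        name := e.Name()
        // Assumed acceptable config extensions; the real plugin's list may differ.
        if strings.HasSuffix(name, ".conf") || strings.HasSuffix(name, ".conflist") || strings.HasSuffix(name, ".json") {
            fmt.Println("found CNI config:", name)
            return
        }
    }
    fmt.Println("no CNI configuration file in", dir, "- node will stay NotReady")
}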
Has your network provider started?"} Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.086414 4893 generic.go:334] "Generic (PLEG): container finished" podID="ac863e9c-63ed-4c56-8687-839ba5845dff" containerID="0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f" exitCode=0 Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.086554 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" event={"ID":"ac863e9c-63ed-4c56-8687-839ba5845dff","Type":"ContainerDied","Data":"0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f"} Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.086641 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" event={"ID":"ac863e9c-63ed-4c56-8687-839ba5845dff","Type":"ContainerStarted","Data":"3492f1300256c3596d8eb28ad3412e9470fe0af74f4176eedace39a16247577b"} Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.092732 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hn5qq" event={"ID":"001ac9ae-35b3-4f82-abaf-1eb6088441e2","Type":"ContainerStarted","Data":"92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf"} Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.092792 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hn5qq" event={"ID":"001ac9ae-35b3-4f82-abaf-1eb6088441e2","Type":"ContainerStarted","Data":"aaab7b04365a8d4c4db4379eb3fe8412a8d49d3c32f1dd1081ead2ff10bee8e3"} Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.095125 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"5f38ec25e4b08f3c8e3878e09791cb0b29f0203b26db1e75de8786f1e1de68e1"} Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.097579 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-krkz9" event={"ID":"a51e5a50-969c-4f25-a895-ebb119642512","Type":"ContainerStarted","Data":"4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0"} Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.097636 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-krkz9" event={"ID":"a51e5a50-969c-4f25-a895-ebb119642512","Type":"ContainerStarted","Data":"159b2259c0df1c1d7c5a22c9dafef00e7227127abb39d4657b5c78f585f060fe"} Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.100247 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e"} Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.100280 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14"} Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.100292 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ae6faffa44cdb9af93e3ad3fbec15bdfc3b21b12ae80f8d5d945622425f4121a"} Jan 28 15:01:44 crc kubenswrapper[4893]: 
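The "SyncLoop (PLEG): event for pod" entries above are the Pod Lifecycle Event Generator feeding container state changes back into the kubelet's sync loop. The sketch below only illustrates the shape of the logged {"ID","Type","Data"} payload using UIDs copied from the entries above; the struct is not kubelet's internal type:

package main

import "fmt"

type plegEvent struct {
    ID   string // pod UID
    Type string // ContainerStarted, ContainerDied, ...
    Data string // container (or sandbox) ID
}

func main() {
    events := []plegEvent{
        {ID: "ac863e9c-63ed-4c56-8687-839ba5845dff", Type: "ContainerDied",
            Data: "0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f"},
        {ID: "ac863e9c-63ed-4c56-8687-839ba5845dff", Type: "ContainerStarted",
            Data: "3492f1300256c3596d8eb28ad3412e9470fe0af74f4176eedace39a16247577b"},
    }
    for _, ev := range events {
        switch ev.Type {
        case "ContainerDied":
            fmt.Printf("pod %s: container %s exited; kubelet re-syncs pod status\n", ev.ID, ev.Data)
        case "ContainerStarted":
            fmt.Printf("pod %s: container %s is running\n", ev.ID, ev.Data)
        }
    }
}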
I0128 15:01:44.102392 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.107070 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.107765 4893 scope.go:117] "RemoveContainer" containerID="fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c" Jan 28 15:01:44 crc kubenswrapper[4893]: E0128 15:01:44.107990 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.108753 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" event={"ID":"135b9f51-26ac-44c4-a817-cbfa4b36ae54","Type":"ContainerStarted","Data":"2620d2575a3a7001dc1d2d5fa4b7c024a4805b9ccb7bdca328b505b9cd1f7991"} Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.111905 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" event={"ID":"b2ddd967-f9a8-464a-95de-512c9c5874fd","Type":"ContainerStarted","Data":"a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268"} Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.112036 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" event={"ID":"b2ddd967-f9a8-464a-95de-512c9c5874fd","Type":"ContainerStarted","Data":"d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95"} Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.112062 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" event={"ID":"b2ddd967-f9a8-464a-95de-512c9c5874fd","Type":"ContainerStarted","Data":"93788edf39f2d65acb6a843d363b9d9379c5f09f46ce3438c2c3377f7a1dd54c"} Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.112519 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.112559 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.112575 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.112598 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.112616 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:44Z","lastTransitionTime":"2026-01-28T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.115600 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b"} Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.115641 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e4b3d23a44f34833272571ac2e766e29af86a782a5797b2d3a7dd5e860b1455c"} Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.124958 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd6cfaa7c19bafc9f2187d3594841df295db09b64e3ae8ceb519950f3f8aab6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:36Z\\\",\\\"message\\\":\\\"W0128 15:01:26.138588 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 
15:01:26.139199 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769612486 cert, and key in /tmp/serving-cert-1145524057/serving-signer.crt, /tmp/serving-cert-1145524057/serving-signer.key\\\\nI0128 15:01:26.422576 1 observer_polling.go:159] Starting file observer\\\\nW0128 15:01:26.426168 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:01:26.426614 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:26.428772 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1145524057/tls.crt::/tmp/serving-cert-1145524057/tls.key\\\\\\\"\\\\nF0128 15:01:36.741363 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 
15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.139265 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.151404 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.168101 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.185794 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.202984 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.215791 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.215828 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.215838 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.215855 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.215866 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:44Z","lastTransitionTime":"2026-01-28T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
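Note the shift in the webhook failures above: earlier patches failed with "connection refused", while these fail during TLS with "x509: certificate has expired or is not yet valid", meaning the webhook is now listening but serving a certificate whose validity window (ending 2025-08-24T17:21:41Z) predates the node's clock. The Go sketch below reproduces that validity test; the NotAfter value and "current time" are taken from the log, while NotBefore is a stand-in, and real code would get all three from the parsed serving certificate:

package main

import (
    "crypto/x509"
    "fmt"
    "time"
)

func main() {
    cert := &x509.Certificate{
        NotBefore: time.Date(2025, 2, 24, 17, 21, 41, 0, time.UTC), // hypothetical issue time
        NotAfter:  time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC), // expiry from the log
    }
    now := time.Date(2026, 1, 28, 15, 1, 44, 0, time.UTC) // "current time" from the log
    if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
        fmt.Printf("x509: certificate has expired or is not yet valid: current time %s is after %s\n",
            now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
    }
}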
Has your network provider started?"} Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.219517 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.236582 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins 
bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-d
ir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.255538 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.269640 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.281516 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.295468 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.307636 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.320379 4893 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.320418 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.320427 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.320443 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.320452 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:44Z","lastTransitionTime":"2026-01-28T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.324088 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.339687 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.356006 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
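A few records up, kube-apiserver-check-endpoints exits with code 255 (a klog Fatal after "pods \"kube-apiserver-crc\" not found") and lands in CrashLoopBackOff with "back-off 10s". The kubelet's restart back-off starts at 10s, doubles after each failed restart, and caps at 5 minutes; a simplified sketch of that schedule (no jitter, no reset after a stable run):

def crashloop_delay(restarts: int, base: int = 10, cap: int = 300) -> int:
    # 10s, 20s, 40s, ... capped at 300s, matching the kubelet's defaults.
    return min(base * (2 ** restarts), cap)

for n in range(6):
    print(f"after failure {n + 1}: back-off {crashloop_delay(n)}s")
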
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.371595 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.386772 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"h
ostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.400203 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.414557 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.423458 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.423512 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.423522 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.423538 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.423547 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:44Z","lastTransitionTime":"2026-01-28T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.428041 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.431066 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.431116 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.431151 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.431182 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:01:44 crc kubenswrapper[4893]: E0128 15:01:44.431271 4893 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:01:44 crc kubenswrapper[4893]: E0128 15:01:44.431314 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:01:44 crc kubenswrapper[4893]: E0128 15:01:44.431356 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:01:44 crc kubenswrapper[4893]: E0128 15:01:44.431357 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:01:44 crc kubenswrapper[4893]: E0128 15:01:44.431364 4893 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:01:44 crc kubenswrapper[4893]: E0128 15:01:44.431370 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:01:46.431345599 +0000 UTC m=+24.204960797 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:01:44 crc kubenswrapper[4893]: E0128 15:01:44.431498 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:01:46.431461712 +0000 UTC m=+24.205076930 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:01:44 crc kubenswrapper[4893]: E0128 15:01:44.431372 4893 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:01:44 crc kubenswrapper[4893]: E0128 15:01:44.431581 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:01:46.431555625 +0000 UTC m=+24.205170653 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:01:44 crc kubenswrapper[4893]: E0128 15:01:44.431386 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:01:44 crc kubenswrapper[4893]: E0128 15:01:44.431613 4893 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:01:44 crc kubenswrapper[4893]: E0128 15:01:44.431640 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:01:46.431633737 +0000 UTC m=+24.205248765 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.445808 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:44 crc 
kubenswrapper[4893]: I0128 15:01:44.467393 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nb
db\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.526079 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.526235 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.526258 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.526274 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.526600 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:44Z","lastTransitionTime":"2026-01-28T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.629654 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.629697 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.629710 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.629728 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.629742 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:44Z","lastTransitionTime":"2026-01-28T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.633985 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:01:44 crc kubenswrapper[4893]: E0128 15:01:44.634365 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:01:46.634328937 +0000 UTC m=+24.407943975 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.732930 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.732996 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.733010 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.733028 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.733038 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:44Z","lastTransitionTime":"2026-01-28T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.837610 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.837663 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.837675 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.837695 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.837707 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:44Z","lastTransitionTime":"2026-01-28T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.840767 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 17:28:21.988979803 +0000 UTC Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.891228 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:01:44 crc kubenswrapper[4893]: E0128 15:01:44.891847 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.891275 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:01:44 crc kubenswrapper[4893]: E0128 15:01:44.892172 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.891230 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:01:44 crc kubenswrapper[4893]: E0128 15:01:44.892539 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.898217 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.899674 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.902466 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.903660 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.905627 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.907170 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.908628 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.910974 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.912685 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.914125 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.915048 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.916851 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.921843 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.922971 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.923612 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.925296 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.925979 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.927445 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.928291 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.929087 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.931072 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.931788 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.932226 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.933373 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.933805 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.935000 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.936112 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.937303 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.938895 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.940166 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.940968 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.941009 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.941024 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.941045 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.941059 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:44Z","lastTransitionTime":"2026-01-28T15:01:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.941267 4893 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.941381 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.943053 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.943853 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.944454 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.945788 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.948778 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.949628 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.951136 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.952508 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.953625 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.954429 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.956025 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.956688 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.957871 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.958428 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.959396 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.960222 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.961160 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.961689 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.962154 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" 
path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.963027 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.963639 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.964503 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.965049 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.967044 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-r6mxl"] Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.967390 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-r6mxl" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.973790 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.974160 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.974294 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.974441 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.974974 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.977581 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.984461 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:44 crc kubenswrapper[4893]: I0128 15:01:44.999558 4893 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.015135 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.028732 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.044007 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.044067 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.044081 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.044102 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.044118 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:45Z","lastTransitionTime":"2026-01-28T15:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.046723 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.060643 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.077612 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.095232 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.118975 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.120142 4893 generic.go:334] "Generic (PLEG): container finished" podID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerID="c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654" exitCode=0 Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.120234 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" event={"ID":"135b9f51-26ac-44c4-a817-cbfa4b36ae54","Type":"ContainerDied","Data":"c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654"} Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.122646 4893 generic.go:334] "Generic (PLEG): container finished" podID="ac863e9c-63ed-4c56-8687-839ba5845dff" containerID="f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2" exitCode=0 Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.122734 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" 
event={"ID":"ac863e9c-63ed-4c56-8687-839ba5845dff","Type":"ContainerDied","Data":"f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2"} Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.138870 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5147fe08-c025-48e8-a623-263b1452e810-host\") pod \"node-ca-r6mxl\" (UID: \"5147fe08-c025-48e8-a623-263b1452e810\") " pod="openshift-image-registry/node-ca-r6mxl" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.138925 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5147fe08-c025-48e8-a623-263b1452e810-serviceca\") pod \"node-ca-r6mxl\" (UID: \"5147fe08-c025-48e8-a623-263b1452e810\") " pod="openshift-image-registry/node-ca-r6mxl" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.138969 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q4ks\" (UniqueName: \"kubernetes.io/projected/5147fe08-c025-48e8-a623-263b1452e810-kube-api-access-5q4ks\") pod \"node-ca-r6mxl\" (UID: \"5147fe08-c025-48e8-a623-263b1452e810\") " pod="openshift-image-registry/node-ca-r6mxl" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.144000 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:45 crc 
kubenswrapper[4893]: I0128 15:01:45.146768 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.146812 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.146824 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.146842 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.146854 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:45Z","lastTransitionTime":"2026-01-28T15:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.178035 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.191635 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.204342 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.218836 4893 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087
acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.239157 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.239867 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5147fe08-c025-48e8-a623-263b1452e810-host\") pod \"node-ca-r6mxl\" (UID: \"5147fe08-c025-48e8-a623-263b1452e810\") " pod="openshift-image-registry/node-ca-r6mxl" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.239927 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5147fe08-c025-48e8-a623-263b1452e810-serviceca\") pod \"node-ca-r6mxl\" (UID: \"5147fe08-c025-48e8-a623-263b1452e810\") " pod="openshift-image-registry/node-ca-r6mxl" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.239957 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q4ks\" (UniqueName: \"kubernetes.io/projected/5147fe08-c025-48e8-a623-263b1452e810-kube-api-access-5q4ks\") pod \"node-ca-r6mxl\" (UID: 
\"5147fe08-c025-48e8-a623-263b1452e810\") " pod="openshift-image-registry/node-ca-r6mxl" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.240031 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5147fe08-c025-48e8-a623-263b1452e810-host\") pod \"node-ca-r6mxl\" (UID: \"5147fe08-c025-48e8-a623-263b1452e810\") " pod="openshift-image-registry/node-ca-r6mxl" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.241989 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5147fe08-c025-48e8-a623-263b1452e810-serviceca\") pod \"node-ca-r6mxl\" (UID: \"5147fe08-c025-48e8-a623-263b1452e810\") " pod="openshift-image-registry/node-ca-r6mxl" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.249257 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.249286 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.249295 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.249309 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.249318 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:45Z","lastTransitionTime":"2026-01-28T15:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.260767 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.274369 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q4ks\" (UniqueName: \"kubernetes.io/projected/5147fe08-c025-48e8-a623-263b1452e810-kube-api-access-5q4ks\") pod \"node-ca-r6mxl\" (UID: \"5147fe08-c025-48e8-a623-263b1452e810\") " pod="openshift-image-registry/node-ca-r6mxl" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.288036 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.300736 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-r6mxl" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.309046 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.345411 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"
name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.351204 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.351255 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.351270 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.351306 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.351318 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:45Z","lastTransitionTime":"2026-01-28T15:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.399947 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.419051 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.452400 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.454260 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.454308 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.454320 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.454336 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.454347 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:45Z","lastTransitionTime":"2026-01-28T15:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.492406 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.537367 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:45Z 
is after 2025-08-24T17:21:41Z" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.557606 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.557663 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.557674 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.557694 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.557710 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:45Z","lastTransitionTime":"2026-01-28T15:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.575518 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 28 
15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.609815 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.661604 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.662054 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.662065 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.662084 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.662105 4893 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:45Z","lastTransitionTime":"2026-01-28T15:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.764390 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.764424 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.764438 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.764453 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.764462 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:45Z","lastTransitionTime":"2026-01-28T15:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.841586 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 03:49:04.008633146 +0000 UTC Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.867131 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.867329 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.867438 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.867571 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.867640 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:45Z","lastTransitionTime":"2026-01-28T15:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.970522 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.970564 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.970574 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.970597 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:45 crc kubenswrapper[4893]: I0128 15:01:45.970607 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:45Z","lastTransitionTime":"2026-01-28T15:01:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.073398 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.073445 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.073458 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.073498 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.073519 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:46Z","lastTransitionTime":"2026-01-28T15:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.129423 4893 generic.go:334] "Generic (PLEG): container finished" podID="ac863e9c-63ed-4c56-8687-839ba5845dff" containerID="3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8" exitCode=0 Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.129533 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" event={"ID":"ac863e9c-63ed-4c56-8687-839ba5845dff","Type":"ContainerDied","Data":"3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8"} Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.133672 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" event={"ID":"135b9f51-26ac-44c4-a817-cbfa4b36ae54","Type":"ContainerStarted","Data":"06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c"} Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.133735 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" event={"ID":"135b9f51-26ac-44c4-a817-cbfa4b36ae54","Type":"ContainerStarted","Data":"e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f"} Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.133755 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" event={"ID":"135b9f51-26ac-44c4-a817-cbfa4b36ae54","Type":"ContainerStarted","Data":"d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646"} Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.133769 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" event={"ID":"135b9f51-26ac-44c4-a817-cbfa4b36ae54","Type":"ContainerStarted","Data":"86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27"} Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.133784 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" event={"ID":"135b9f51-26ac-44c4-a817-cbfa4b36ae54","Type":"ContainerStarted","Data":"02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b"} Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.135330 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-r6mxl" event={"ID":"5147fe08-c025-48e8-a623-263b1452e810","Type":"ContainerStarted","Data":"fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39"} Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.135380 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-r6mxl" event={"ID":"5147fe08-c025-48e8-a623-263b1452e810","Type":"ContainerStarted","Data":"2a63210855df49031f19fb6ef8b4bc73b543ae36a4477f742f7f223e3d6e748a"} Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.145260 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.162649 4893 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.178943 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.179012 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.179031 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.178988 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.179057 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.179076 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:46Z","lastTransitionTime":"2026-01-28T15:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.195050 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.209739 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.224377 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.240903 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.253382 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.269747 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.281930 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.281982 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.281995 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.282014 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.282025 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:46Z","lastTransitionTime":"2026-01-28T15:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.290440 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911
cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.307285 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.322397 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.339196 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.351902 4893 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.369629 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.384445 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.384508 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.384522 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.384539 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.384550 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:46Z","lastTransitionTime":"2026-01-28T15:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.386385 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.400975 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.424134 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.448194 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:46Z 
is after 2025-08-24T17:21:41Z" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.453466 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.453550 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.453586 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.453618 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:01:46 crc kubenswrapper[4893]: E0128 15:01:46.453695 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:01:46 crc kubenswrapper[4893]: E0128 15:01:46.453734 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:01:46 crc kubenswrapper[4893]: E0128 15:01:46.453752 4893 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:01:46 crc kubenswrapper[4893]: E0128 15:01:46.453738 4893 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:01:46 crc kubenswrapper[4893]: E0128 15:01:46.453840 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:01:50.453817232 +0000 UTC m=+28.227432260 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:01:46 crc kubenswrapper[4893]: E0128 15:01:46.453864 4893 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:01:46 crc kubenswrapper[4893]: E0128 15:01:46.453873 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:01:50.453849173 +0000 UTC m=+28.227464251 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:01:46 crc kubenswrapper[4893]: E0128 15:01:46.453877 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:01:46 crc kubenswrapper[4893]: E0128 15:01:46.453923 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:01:46 crc kubenswrapper[4893]: E0128 15:01:46.453937 4893 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:01:46 crc kubenswrapper[4893]: E0128 15:01:46.453954 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:01:50.453915845 +0000 UTC m=+28.227530943 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:01:46 crc kubenswrapper[4893]: E0128 15:01:46.453998 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:01:50.453981047 +0000 UTC m=+28.227596075 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.464955 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.476763 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.487864 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.487916 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.487931 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.487951 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.487961 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:46Z","lastTransitionTime":"2026-01-28T15:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.494605 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.533966 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.575285 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc
77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.590315 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.590384 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.590408 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.590437 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.590459 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:46Z","lastTransitionTime":"2026-01-28T15:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.616237 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.650518 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.655844 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:01:46 crc kubenswrapper[4893]: E0128 15:01:46.656004 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:01:50.655975048 +0000 UTC m=+28.429590096 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.691021 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.692599 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.692630 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.692641 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.692657 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.692666 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:46Z","lastTransitionTime":"2026-01-28T15:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.730208 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.796173 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 
15:01:46.796499 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.796602 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.796721 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.796826 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:46Z","lastTransitionTime":"2026-01-28T15:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.797174 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.798044 4893 scope.go:117] "RemoveContainer" containerID="fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c" Jan 28 15:01:46 crc kubenswrapper[4893]: E0128 15:01:46.798331 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.843351 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 20:28:42.132767193 +0000 UTC Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.891304 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:01:46 crc kubenswrapper[4893]: E0128 15:01:46.891519 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.891327 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.891299 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:01:46 crc kubenswrapper[4893]: E0128 15:01:46.891667 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:01:46 crc kubenswrapper[4893]: E0128 15:01:46.891979 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.898981 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.899031 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.899045 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.899062 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:46 crc kubenswrapper[4893]: I0128 15:01:46.899075 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:46Z","lastTransitionTime":"2026-01-28T15:01:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.001284 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.001531 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.001625 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.001743 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.001847 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:47Z","lastTransitionTime":"2026-01-28T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.104284 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.104537 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.104652 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.104771 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.104866 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:47Z","lastTransitionTime":"2026-01-28T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.140427 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee"} Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.144111 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" event={"ID":"135b9f51-26ac-44c4-a817-cbfa4b36ae54","Type":"ContainerStarted","Data":"b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0"} Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.147045 4893 generic.go:334] "Generic (PLEG): container finished" podID="ac863e9c-63ed-4c56-8687-839ba5845dff" containerID="c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0" exitCode=0 Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.147116 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" event={"ID":"ac863e9c-63ed-4c56-8687-839ba5845dff","Type":"ContainerDied","Data":"c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0"} Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.159560 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.174399 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.189924 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.205792 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.209228 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.209264 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.209276 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.209292 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.209304 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:47Z","lastTransitionTime":"2026-01-28T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.217300 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.228401 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.242343 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.254390 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.265467 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.280859 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.301995 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:47Z 
is after 2025-08-24T17:21:41Z" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.315426 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.315485 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.315499 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.315516 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.315527 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:47Z","lastTransitionTime":"2026-01-28T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.320785 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:47Z is after 2025-08-24T17:21:41Z" Jan 28 
15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.333300 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.348000 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.366077 4893 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2
736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.387347 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name
\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\
":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.414317 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.418441 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.418518 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.418531 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.418552 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.418562 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:47Z","lastTransitionTime":"2026-01-28T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.453136 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.492668 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.521569 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.521630 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.521657 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.521681 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.521697 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:47Z","lastTransitionTime":"2026-01-28T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.533343 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.573056 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.612114 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.624634 4893 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.624687 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.624704 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.624730 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.624750 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:47Z","lastTransitionTime":"2026-01-28T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.657707 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.694921 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.727679 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.727746 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.727759 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.727778 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.727788 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:47Z","lastTransitionTime":"2026-01-28T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.737539 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.775750 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.811866 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.830138 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.830185 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.830198 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.830217 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.830226 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:47Z","lastTransitionTime":"2026-01-28T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.844322 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 09:11:58.427220327 +0000 UTC Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.852456 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\
\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:47Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.932467 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.932729 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.932809 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.932900 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 
28 15:01:47 crc kubenswrapper[4893]: I0128 15:01:47.932958 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:47Z","lastTransitionTime":"2026-01-28T15:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.034879 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.035181 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.035281 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.035375 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.035500 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:48Z","lastTransitionTime":"2026-01-28T15:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.138353 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.138403 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.138417 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.138434 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.138446 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:48Z","lastTransitionTime":"2026-01-28T15:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.153249 4893 generic.go:334] "Generic (PLEG): container finished" podID="ac863e9c-63ed-4c56-8687-839ba5845dff" containerID="5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e" exitCode=0 Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.154012 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" event={"ID":"ac863e9c-63ed-4c56-8687-839ba5845dff","Type":"ContainerDied","Data":"5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e"} Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.178936 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-conf
ig\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.198629 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28
T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.214992 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.230202 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.241230 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.241286 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.241299 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.241324 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.241335 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:48Z","lastTransitionTime":"2026-01-28T15:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.246790 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.263064 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.277425 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.296616 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.308712 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.326153 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.344246 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.344316 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.344329 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.344406 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.344421 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:48Z","lastTransitionTime":"2026-01-28T15:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.348854 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911
cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.366114 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.379380 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.413509 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.447821 4893 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.447860 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.447869 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.447887 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.447898 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:48Z","lastTransitionTime":"2026-01-28T15:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.551061 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.551113 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.551127 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.551148 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.551161 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:48Z","lastTransitionTime":"2026-01-28T15:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.654613 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.654863 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.654874 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.654890 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.654901 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:48Z","lastTransitionTime":"2026-01-28T15:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.757852 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.757897 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.757909 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.757927 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.757938 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:48Z","lastTransitionTime":"2026-01-28T15:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.845010 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 15:20:24.981087494 +0000 UTC Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.861590 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.862164 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.862179 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.862238 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.862257 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:48Z","lastTransitionTime":"2026-01-28T15:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.891349 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:01:48 crc kubenswrapper[4893]: E0128 15:01:48.891563 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.891824 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.891944 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:01:48 crc kubenswrapper[4893]: E0128 15:01:48.891988 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:01:48 crc kubenswrapper[4893]: E0128 15:01:48.892156 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.965293 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.965341 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.965352 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.965372 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:48 crc kubenswrapper[4893]: I0128 15:01:48.965384 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:48Z","lastTransitionTime":"2026-01-28T15:01:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.068054 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.068293 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.068355 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.068425 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.068516 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:49Z","lastTransitionTime":"2026-01-28T15:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.161925 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" event={"ID":"135b9f51-26ac-44c4-a817-cbfa4b36ae54","Type":"ContainerStarted","Data":"3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af"} Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.166173 4893 generic.go:334] "Generic (PLEG): container finished" podID="ac863e9c-63ed-4c56-8687-839ba5845dff" containerID="35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba" exitCode=0 Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.166239 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" event={"ID":"ac863e9c-63ed-4c56-8687-839ba5845dff","Type":"ContainerDied","Data":"35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba"} Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.172291 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.172348 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.172362 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.172384 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.172400 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:49Z","lastTransitionTime":"2026-01-28T15:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.180465 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.198419 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.210791 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.227759 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://526
7412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.251148 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.270988 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.275742 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.275826 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.275846 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.275871 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.275888 4893 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:49Z","lastTransitionTime":"2026-01-28T15:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.285233 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.298101 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.313422 4893 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087
acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.330560 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.346604 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.363089 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.379071 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.379106 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.379115 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.379135 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.379144 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:49Z","lastTransitionTime":"2026-01-28T15:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.378943 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.395489 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.482015 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.482073 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.482087 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.482107 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.482117 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:49Z","lastTransitionTime":"2026-01-28T15:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.585209 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.585259 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.585276 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.585294 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.585308 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:49Z","lastTransitionTime":"2026-01-28T15:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.688932 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.689020 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.689046 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.689081 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.689106 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:49Z","lastTransitionTime":"2026-01-28T15:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.792033 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.792087 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.792102 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.792122 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.792141 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:49Z","lastTransitionTime":"2026-01-28T15:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.845203 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 12:21:15.794803088 +0000 UTC Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.894882 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.894938 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.894951 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.894972 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.894986 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:49Z","lastTransitionTime":"2026-01-28T15:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.998232 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.998279 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.998292 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.998311 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:49 crc kubenswrapper[4893]: I0128 15:01:49.998323 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:49Z","lastTransitionTime":"2026-01-28T15:01:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.101462 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.101529 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.101543 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.101561 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.101574 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:50Z","lastTransitionTime":"2026-01-28T15:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.173745 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" event={"ID":"ac863e9c-63ed-4c56-8687-839ba5845dff","Type":"ContainerStarted","Data":"8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c"} Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.189176 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4
b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.203613 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.203682 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.203695 4893 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.203721 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.203736 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:50Z","lastTransitionTime":"2026-01-28T15:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.207321 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.222904 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.234022 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.249850 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.265926 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"h
ostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.279541 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.284702 4893 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.292843 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.305925 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.305981 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.305995 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.306014 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.306026 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:50Z","lastTransitionTime":"2026-01-28T15:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.307536 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.324012 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disab
led\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:50 crc 
kubenswrapper[4893]: I0128 15:01:50.345051 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"Po
dInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.361805 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.372630 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.383029 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.401036 4893 reflector.go:368] Caches populated 
for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.408550 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.408585 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.408594 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.408608 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.408617 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:50Z","lastTransitionTime":"2026-01-28T15:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.510909 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.511389 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.511408 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.511432 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.511446 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:50Z","lastTransitionTime":"2026-01-28T15:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.523846 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.523896 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.523919 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.523952 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:01:50 crc kubenswrapper[4893]: E0128 15:01:50.524069 4893 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:01:50 crc kubenswrapper[4893]: E0128 15:01:50.524103 4893 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:01:50 crc kubenswrapper[4893]: E0128 15:01:50.524124 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:01:50 crc kubenswrapper[4893]: E0128 15:01:50.524152 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:01:50 crc kubenswrapper[4893]: E0128 15:01:50.524114 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:01:50 crc kubenswrapper[4893]: E0128 15:01:50.524189 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:01:50 crc kubenswrapper[4893]: E0128 15:01:50.524205 4893 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:01:50 crc kubenswrapper[4893]: E0128 15:01:50.524169 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:01:58.524147092 +0000 UTC m=+36.297762120 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:01:50 crc kubenswrapper[4893]: E0128 15:01:50.524166 4893 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:01:50 crc kubenswrapper[4893]: E0128 15:01:50.524279 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:01:58.524257445 +0000 UTC m=+36.297872473 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:01:50 crc kubenswrapper[4893]: E0128 15:01:50.524298 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:01:58.524290366 +0000 UTC m=+36.297905624 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:01:50 crc kubenswrapper[4893]: E0128 15:01:50.524328 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:01:58.524316517 +0000 UTC m=+36.297931795 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.614190 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.614236 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.614246 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.614274 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.614286 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:50Z","lastTransitionTime":"2026-01-28T15:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.716878 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.716933 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.716947 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.716973 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.716990 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:50Z","lastTransitionTime":"2026-01-28T15:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.725660 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:01:50 crc kubenswrapper[4893]: E0128 15:01:50.725975 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-28 15:01:58.725834786 +0000 UTC m=+36.499449814 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.820051 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.820082 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.820092 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.820106 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.820115 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:50Z","lastTransitionTime":"2026-01-28T15:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.846355 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 02:21:40.832439314 +0000 UTC Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.890826 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.890826 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:01:50 crc kubenswrapper[4893]: E0128 15:01:50.891004 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.890851 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:01:50 crc kubenswrapper[4893]: E0128 15:01:50.891268 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:01:50 crc kubenswrapper[4893]: E0128 15:01:50.891387 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.922162 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.922204 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.922216 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.922233 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:50 crc kubenswrapper[4893]: I0128 15:01:50.922246 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:50Z","lastTransitionTime":"2026-01-28T15:01:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.025283 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.025347 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.025363 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.025386 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.025402 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:51Z","lastTransitionTime":"2026-01-28T15:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.128069 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.128343 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.128405 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.128530 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.128593 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:51Z","lastTransitionTime":"2026-01-28T15:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.186780 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" event={"ID":"135b9f51-26ac-44c4-a817-cbfa4b36ae54","Type":"ContainerStarted","Data":"97a23d69b3042bf2700cbd26f6ea69a6bc1c6df2e6c99e1e2813b0955fc78ecf"} Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.187266 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.206144 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.222163 4893 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.222922 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.234097 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.234151 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.234168 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.234188 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.234204 4893 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:51Z","lastTransitionTime":"2026-01-28T15:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.244860 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.261205 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.277065 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab
95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.294181 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.308210 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.329256 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a23d69b3042bf2700cbd26f6ea69a6bc1c6df2
e6c99e1e2813b0955fc78ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.337712 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.337802 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.337832 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.337871 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.337893 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:51Z","lastTransitionTime":"2026-01-28T15:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.344908 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.356436 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.368637 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.385367 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.404560 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.417861 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.433737 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.441202 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.441283 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.441309 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.441341 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.441361 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:51Z","lastTransitionTime":"2026-01-28T15:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.451073 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.467100 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.481375 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.497423 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.513923 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"h
ostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.529752 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.543669 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.543737 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.543758 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.543786 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.543807 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:51Z","lastTransitionTime":"2026-01-28T15:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.547238 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.562746 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.579791 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.600272 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a23d69b3042bf2700cbd26f6ea69a6bc1c6df2e6c99e1e2813b0955fc78ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.614860 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.625647 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.638177 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.645782 4893 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.645824 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.645834 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.645852 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.645863 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:51Z","lastTransitionTime":"2026-01-28T15:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.749069 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.749309 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.749369 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.749566 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.749608 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:51Z","lastTransitionTime":"2026-01-28T15:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.846829 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 08:25:50.632140763 +0000 UTC Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.852751 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.852810 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.852823 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.852843 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.852889 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:51Z","lastTransitionTime":"2026-01-28T15:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.955989 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.956031 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.956043 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.956059 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:51 crc kubenswrapper[4893]: I0128 15:01:51.956073 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:51Z","lastTransitionTime":"2026-01-28T15:01:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.059379 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.059420 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.059432 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.059447 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.059456 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:52Z","lastTransitionTime":"2026-01-28T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.163107 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.163174 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.163194 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.163220 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.163241 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:52Z","lastTransitionTime":"2026-01-28T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.191025 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.191082 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.221281 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.246376 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",
\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.267169 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.267235 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.267257 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.267280 4893 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.267295 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:52Z","lastTransitionTime":"2026-01-28T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.269306 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.290621 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.309455 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.329851 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"h
ostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.351547 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.366699 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.370372 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.370424 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.370437 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.370459 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.370488 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:52Z","lastTransitionTime":"2026-01-28T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.379688 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.396950 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.416410 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a23d69b3042bf2700cbd26f6ea69a6bc1c6df2e6c99e1e2813b0955fc78ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.428072 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.440766 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.452697 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.464635 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.473376 4893 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.473423 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.473437 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.473456 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.473492 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:52Z","lastTransitionTime":"2026-01-28T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.549849 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.549898 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.549916 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.549937 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.549949 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:52Z","lastTransitionTime":"2026-01-28T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:52 crc kubenswrapper[4893]: E0128 15:01:52.563394 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.566642 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.566688 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.566701 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.566719 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.566731 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:52Z","lastTransitionTime":"2026-01-28T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:52 crc kubenswrapper[4893]: E0128 15:01:52.578933 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.583426 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.583505 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.583523 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.583540 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.583551 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:52Z","lastTransitionTime":"2026-01-28T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:52 crc kubenswrapper[4893]: E0128 15:01:52.596290 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.600469 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.600746 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.600765 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.600794 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.600815 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:52Z","lastTransitionTime":"2026-01-28T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:52 crc kubenswrapper[4893]: E0128 15:01:52.613187 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.617166 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.617212 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.617224 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.617245 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.617357 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:52Z","lastTransitionTime":"2026-01-28T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:52 crc kubenswrapper[4893]: E0128 15:01:52.638398 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:52 crc kubenswrapper[4893]: E0128 15:01:52.638539 4893 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.640322 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.640373 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.640390 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.640420 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.640435 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:52Z","lastTransitionTime":"2026-01-28T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.743255 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.743305 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.743315 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.743335 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.743346 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:52Z","lastTransitionTime":"2026-01-28T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.846004 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.846046 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.846060 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.846080 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.846090 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:52Z","lastTransitionTime":"2026-01-28T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.847080 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 08:53:02.853153572 +0000 UTC Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.891695 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.891792 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.891805 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:01:52 crc kubenswrapper[4893]: E0128 15:01:52.891912 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:01:52 crc kubenswrapper[4893]: E0128 15:01:52.892008 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:01:52 crc kubenswrapper[4893]: E0128 15:01:52.892102 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.905455 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.921756 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.936494 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.952440 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.952581 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.952678 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.953851 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.953892 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:52Z","lastTransitionTime":"2026-01-28T15:01:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.957874 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f27
36ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:52 crc kubenswrapper[4893]: I0128 15:01:52.981576 4893 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32
fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a23d69b3042bf2700cbd26f6ea69a6bc1c6df2e6c99e1e2813b0955fc78ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.001257 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
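Every failed status patch in this stretch dies on the same check: the kubelet's call to the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is rejected because the webhook's serving certificate (mounted at /etc/webhook-cert/ in the network-node-identity-vrzqb status later in this stretch) expired on 2025-08-24, while the node clock reads 2026-01-28. The "certificate has expired or is not yet valid" text is Go's standard crypto/x509 validity error; a minimal sketch of the same comparison, assuming a PEM certificate path passed as the first argument:

    // certcheck.go: the validity test behind the "x509: certificate has
    // expired or is not yet valid" errors in the entries above.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        raw, err := os.ReadFile(os.Args[1]) // path to a PEM cert (assumed argument)
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        now := time.Now()
        // Same window check crypto/x509 performs during chain verification.
        switch {
        case now.After(cert.NotAfter):
            fmt.Printf("expired: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
        case now.Before(cert.NotBefore):
            fmt.Printf("not yet valid: current time %s is before %s\n",
                now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
        default:
            fmt.Println("within validity window, expires",
                cert.NotAfter.Format(time.RFC3339))
        }
    }

Run against the webhook's serving cert, this would report the same 2025-08-24 cutoff the kubelet keeps hitting.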
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.014166 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.034238 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.056551 4893 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.065235 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.065415 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:53 
crc kubenswrapper[4893]: I0128 15:01:53.065436 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.065454 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.065465 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:53Z","lastTransitionTime":"2026-01-28T15:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.078966 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\
\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.095050 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.114599 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
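Interleaved with the patch failures, the kubelet keeps re-recording NodeNotReady: ovnkube-controller is still not ready (ready:false in the ovnkube-node-5q54w status above), so nothing has written a CNI config into /etc/kubernetes/cni/net.d/ and the runtime network check fails. A loose sketch of that readiness test, assuming the check reduces to "at least one network config file exists in the conf dir" (the real path goes through CRI-O and libcni):

    // cnicheck.go: loose sketch of the condition behind
    // "no CNI configuration file in /etc/kubernetes/cni/net.d/".
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        confDir := "/etc/kubernetes/cni/net.d" // directory from the log line
        var found []string
        for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
            m, _ := filepath.Glob(filepath.Join(confDir, pattern))
            found = append(found, m...)
        }
        if len(found) == 0 {
            fmt.Printf("NetworkReady=false: no CNI configuration file in %s\n", confDir)
            os.Exit(1)
        }
        fmt.Printf("NetworkReady=true: %v\n", found)
    }

On this node it would keep exiting 1 until ovnkube-controller comes up and drops its config file.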
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.129280 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.145228 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.169098 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.169142 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.169152 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.169166 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.169176 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:53Z","lastTransitionTime":"2026-01-28T15:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.272375 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.272431 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.272452 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.272508 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.272530 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:53Z","lastTransitionTime":"2026-01-28T15:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.376013 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.376091 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.376111 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.376171 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.376191 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:53Z","lastTransitionTime":"2026-01-28T15:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.479177 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.479266 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.479288 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.479313 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.479331 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:53Z","lastTransitionTime":"2026-01-28T15:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.582346 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.582404 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.582418 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.582441 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.582457 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:53Z","lastTransitionTime":"2026-01-28T15:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.685918 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.685979 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.685990 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.686010 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.686024 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:53Z","lastTransitionTime":"2026-01-28T15:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.789415 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.789497 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.789515 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.789587 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.789600 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:53Z","lastTransitionTime":"2026-01-28T15:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.848148 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 04:53:09.507293292 +0000 UTC Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.892813 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.893129 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.893139 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.893156 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.893168 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:53Z","lastTransitionTime":"2026-01-28T15:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.997016 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.997067 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.997082 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.997107 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:53 crc kubenswrapper[4893]: I0128 15:01:53.997125 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:53Z","lastTransitionTime":"2026-01-28T15:01:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.100554 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.100633 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.100653 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.100682 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.100704 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:54Z","lastTransitionTime":"2026-01-28T15:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.199993 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5q54w_135b9f51-26ac-44c4-a817-cbfa4b36ae54/ovnkube-controller/0.log" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.203210 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.203267 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.203294 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.203323 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.203345 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:54Z","lastTransitionTime":"2026-01-28T15:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.205100 4893 generic.go:334] "Generic (PLEG): container finished" podID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerID="97a23d69b3042bf2700cbd26f6ea69a6bc1c6df2e6c99e1e2813b0955fc78ecf" exitCode=1 Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.205178 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" event={"ID":"135b9f51-26ac-44c4-a817-cbfa4b36ae54","Type":"ContainerDied","Data":"97a23d69b3042bf2700cbd26f6ea69a6bc1c6df2e6c99e1e2813b0955fc78ecf"} Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.205889 4893 scope.go:117] "RemoveContainer" containerID="97a23d69b3042bf2700cbd26f6ea69a6bc1c6df2e6c99e1e2813b0955fc78ecf" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.232527 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.250206 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status 
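The certificate_manager.go entry a few lines above is the contrasting healthy case: the kubelet-serving certificate is still inside its validity window (expires 2026-02-24), and its computed rotation deadline (2025-12-02) already lies behind the node clock, so the manager will start rotating immediately instead of failing like the webhook cert. A sketch of that deadline computation; the 70-90% jitter window mirrors upstream client-go behaviour and is an assumption here, as is the one-year NotBefore, which the log does not print:

    // rotation.go: sketch of the jittered rotation deadline reported by
    // certificate_manager.go above.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        // Pick a point roughly 70-90% of the way through the validity
        // window so every kubelet does not rotate at the same instant.
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        // NotAfter from the log line; NotBefore is hypothetical.
        notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
        notBefore := notAfter.Add(-365 * 24 * time.Hour)
        fmt.Println("rotation deadline:",
            rotationDeadline(notBefore, notAfter).Format(time.RFC3339))
    }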
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.271607 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.303688 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a23d69b3042bf2700cbd26f6ea69a6bc1c6df2e6c99e1e2813b0955fc78ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97a23d69b3042bf2700cbd26f6ea69a6bc1c6df2e6c99e1e2813b0955fc78ecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:01:53Z\\\",\\\"message\\\":\\\"53.845419 6196 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 15:01:53.845652 6196 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:01:53.845781 6196 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:01:53.845935 6196 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:01:53.846218 6196 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:01:53.846738 6196 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:01:53.846867 6196 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:01:53.847346 6196 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:01:53.847384 6196 factory.go:656] Stopping watch factory\\\\nI0128 15:01:53.847408 6196 ovnkube.go:599] Stopped ovnkube\\\\nI0128 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.306283 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.306316 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.306329 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.306350 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.306364 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:54Z","lastTransitionTime":"2026-01-28T15:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.319812 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.373041 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.378493 4893 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.410511 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.411963 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.412002 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.412016 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.412035 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.412049 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:54Z","lastTransitionTime":"2026-01-28T15:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.426183 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.441766 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d74
62\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.456581 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.469695 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.484344 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.498290 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.511023 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.514951 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.514981 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.514994 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.515010 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.515019 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:54Z","lastTransitionTime":"2026-01-28T15:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.618118 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.618169 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.618181 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.618200 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.618215 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:54Z","lastTransitionTime":"2026-01-28T15:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.720434 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.720507 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.720522 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.720540 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.720553 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:54Z","lastTransitionTime":"2026-01-28T15:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.818006 4893 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.822932 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.822986 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.822999 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.823022 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.823041 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:54Z","lastTransitionTime":"2026-01-28T15:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.848600 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 17:45:21.687862479 +0000 UTC Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.891542 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.891610 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:01:54 crc kubenswrapper[4893]: E0128 15:01:54.891697 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.891722 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:01:54 crc kubenswrapper[4893]: E0128 15:01:54.891999 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:01:54 crc kubenswrapper[4893]: E0128 15:01:54.892209 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.925945 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.926002 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.926014 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.926036 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:54 crc kubenswrapper[4893]: I0128 15:01:54.926058 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:54Z","lastTransitionTime":"2026-01-28T15:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.028821 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.028891 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.028907 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.028934 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.028950 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:55Z","lastTransitionTime":"2026-01-28T15:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.116521 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm"] Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.116956 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.119657 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.121140 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.130880 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.130925 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.130940 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.130960 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.130974 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:55Z","lastTransitionTime":"2026-01-28T15:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.135768 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.146431 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.159323 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"673baa26-aa9b-4740-b00a-27d20d947fc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9hnxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.173574 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.185159 4893 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/673baa26-aa9b-4740-b00a-27d20d947fc4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9hnxm\" (UID: \"673baa26-aa9b-4740-b00a-27d20d947fc4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.185224 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c956w\" (UniqueName: \"kubernetes.io/projected/673baa26-aa9b-4740-b00a-27d20d947fc4-kube-api-access-c956w\") pod \"ovnkube-control-plane-749d76644c-9hnxm\" (UID: \"673baa26-aa9b-4740-b00a-27d20d947fc4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.185275 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/673baa26-aa9b-4740-b00a-27d20d947fc4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9hnxm\" (UID: \"673baa26-aa9b-4740-b00a-27d20d947fc4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.185295 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/673baa26-aa9b-4740-b00a-27d20d947fc4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9hnxm\" (UID: \"673baa26-aa9b-4740-b00a-27d20d947fc4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.187891 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.209263 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.210516 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5q54w_135b9f51-26ac-44c4-a817-cbfa4b36ae54/ovnkube-controller/0.log" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.213909 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" event={"ID":"135b9f51-26ac-44c4-a817-cbfa4b36ae54","Type":"ContainerStarted","Data":"aefad88fce146a71537f97c7d24b173334df3029b961374eac0ce4a7abc1a256"} Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.214367 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.227514 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.233164 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.233203 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.233216 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.233234 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.233246 4893 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:55Z","lastTransitionTime":"2026-01-28T15:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.242412 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.256596 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.271593 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.286360 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/673baa26-aa9b-4740-b00a-27d20d947fc4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9hnxm\" (UID: \"673baa26-aa9b-4740-b00a-27d20d947fc4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.286417 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/673baa26-aa9b-4740-b00a-27d20d947fc4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9hnxm\" (UID: \"673baa26-aa9b-4740-b00a-27d20d947fc4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.286522 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/673baa26-aa9b-4740-b00a-27d20d947fc4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9hnxm\" (UID: \"673baa26-aa9b-4740-b00a-27d20d947fc4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.286574 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c956w\" (UniqueName: \"kubernetes.io/projected/673baa26-aa9b-4740-b00a-27d20d947fc4-kube-api-access-c956w\") pod \"ovnkube-control-plane-749d76644c-9hnxm\" (UID: \"673baa26-aa9b-4740-b00a-27d20d947fc4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.287182 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/673baa26-aa9b-4740-b00a-27d20d947fc4-env-overrides\") 
pod \"ovnkube-control-plane-749d76644c-9hnxm\" (UID: \"673baa26-aa9b-4740-b00a-27d20d947fc4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.287676 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/673baa26-aa9b-4740-b00a-27d20d947fc4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9hnxm\" (UID: \"673baa26-aa9b-4740-b00a-27d20d947fc4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.292161 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.294812 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/673baa26-aa9b-4740-b00a-27d20d947fc4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9hnxm\" (UID: \"673baa26-aa9b-4740-b00a-27d20d947fc4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.303052 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c956w\" (UniqueName: \"kubernetes.io/projected/673baa26-aa9b-4740-b00a-27d20d947fc4-kube-api-access-c956w\") pod \"ovnkube-control-plane-749d76644c-9hnxm\" (UID: \"673baa26-aa9b-4740-b00a-27d20d947fc4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.309371 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.323785 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.336307 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.336345 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.336356 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.336373 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.336383 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:55Z","lastTransitionTime":"2026-01-28T15:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.338861 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f27
36ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.357407 4893 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32
fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a23d69b3042bf2700cbd26f6ea69a6bc1c6df2e6c99e1e2813b0955fc78ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97a23d69b3042bf2700cbd26f6ea69a6bc1c6df2e6c99e1e2813b0955fc78ecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:01:53Z\\\",\\\"message\\\":\\\"53.845419 6196 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 15:01:53.845652 6196 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:01:53.845781 6196 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:01:53.845935 6196 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:01:53.846218 6196 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:01:53.846738 6196 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:01:53.846867 6196 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:01:53.847346 6196 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:01:53.847384 6196 factory.go:656] Stopping watch factory\\\\nI0128 15:01:53.847408 6196 ovnkube.go:599] Stopped ovnkube\\\\nI0128 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.371013 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.385383 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.402086 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.415905 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.430246 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.430735 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/va
r/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.440873 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.440956 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.440978 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.441007 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.441026 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:55Z","lastTransitionTime":"2026-01-28T15:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:55 crc kubenswrapper[4893]: W0128 15:01:55.443144 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod673baa26_aa9b_4740_b00a_27d20d947fc4.slice/crio-cad3ec55b5aee7c6f18031305a546d3af20f26c6336838931a4c7b59b00f24ec WatchSource:0}: Error finding container cad3ec55b5aee7c6f18031305a546d3af20f26c6336838931a4c7b59b00f24ec: Status 404 returned error can't find the container with id cad3ec55b5aee7c6f18031305a546d3af20f26c6336838931a4c7b59b00f24ec Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.447835 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92e
daf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.465638 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.488575 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.507653 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aefad88fce146a71537f97c7d24b173334df3029b961374eac0ce4a7abc1a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97a23d69b3042bf2700cbd26f6ea69a6bc1c6df2e6c99e1e2813b0955fc78ecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:01:53Z\\\",\\\"message\\\":\\\"53.845419 6196 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 15:01:53.845652 6196 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:01:53.845781 6196 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:01:53.845935 6196 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:01:53.846218 6196 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:01:53.846738 6196 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:01:53.846867 6196 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:01:53.847346 6196 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:01:53.847384 6196 factory.go:656] Stopping watch factory\\\\nI0128 15:01:53.847408 6196 ovnkube.go:599] Stopped ovnkube\\\\nI0128 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.525698 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.541385 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.544153 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.544222 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.544235 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.544281 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.544295 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:55Z","lastTransitionTime":"2026-01-28T15:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.557074 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc 
kubenswrapper[4893]: I0128 15:01:55.570915 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.583001 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"673baa26-aa9b-4740-b00a-27d20d947fc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9hnxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.598500 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.647094 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.647136 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.647147 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.647163 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.647172 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:55Z","lastTransitionTime":"2026-01-28T15:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.750022 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.750067 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.750077 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.750095 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.750105 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:55Z","lastTransitionTime":"2026-01-28T15:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.849499 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 07:46:13.920750927 +0000 UTC Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.853183 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.853225 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.853237 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.853254 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.853264 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:55Z","lastTransitionTime":"2026-01-28T15:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.955943 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.955998 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.956010 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.956034 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:55 crc kubenswrapper[4893]: I0128 15:01:55.956046 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:55Z","lastTransitionTime":"2026-01-28T15:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.058774 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.058823 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.058835 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.058856 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.058869 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:56Z","lastTransitionTime":"2026-01-28T15:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.161854 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.161920 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.161934 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.161956 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.161967 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:56Z","lastTransitionTime":"2026-01-28T15:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.219094 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5q54w_135b9f51-26ac-44c4-a817-cbfa4b36ae54/ovnkube-controller/1.log" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.219976 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5q54w_135b9f51-26ac-44c4-a817-cbfa4b36ae54/ovnkube-controller/0.log" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.223243 4893 generic.go:334] "Generic (PLEG): container finished" podID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerID="aefad88fce146a71537f97c7d24b173334df3029b961374eac0ce4a7abc1a256" exitCode=1 Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.223361 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" event={"ID":"135b9f51-26ac-44c4-a817-cbfa4b36ae54","Type":"ContainerDied","Data":"aefad88fce146a71537f97c7d24b173334df3029b961374eac0ce4a7abc1a256"} Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.223490 4893 scope.go:117] "RemoveContainer" containerID="97a23d69b3042bf2700cbd26f6ea69a6bc1c6df2e6c99e1e2813b0955fc78ecf" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.224104 4893 scope.go:117] "RemoveContainer" containerID="aefad88fce146a71537f97c7d24b173334df3029b961374eac0ce4a7abc1a256" Jan 28 15:01:56 crc kubenswrapper[4893]: E0128 15:01:56.224305 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-5q54w_openshift-ovn-kubernetes(135b9f51-26ac-44c4-a817-cbfa4b36ae54)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.225064 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" event={"ID":"673baa26-aa9b-4740-b00a-27d20d947fc4","Type":"ContainerStarted","Data":"73fc6009434717a8377a5cc6a836235c61de79ea9f85581a22d8c2229dde2c4c"} Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.225110 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" event={"ID":"673baa26-aa9b-4740-b00a-27d20d947fc4","Type":"ContainerStarted","Data":"a0855aa9e95ef9375ded8247008dd1b259573f1b89b2aacb460cb2de280d4289"} Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.225124 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" event={"ID":"673baa26-aa9b-4740-b00a-27d20d947fc4","Type":"ContainerStarted","Data":"cad3ec55b5aee7c6f18031305a546d3af20f26c6336838931a4c7b59b00f24ec"} Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.242696 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.257143 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.265453 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.265520 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.265537 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.265560 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.265575 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:56Z","lastTransitionTime":"2026-01-28T15:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.274891 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.294838 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.319533 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aefad88fce146a71537f97c7d24b173334df3029b961374eac0ce4a7abc1a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97a23d69b3042bf2700cbd26f6ea69a6bc1c6df2e6c99e1e2813b0955fc78ecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:01:53Z\\\",\\\"message\\\":\\\"53.845419 6196 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 15:01:53.845652 6196 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:01:53.845781 6196 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:01:53.845935 6196 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:01:53.846218 6196 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:01:53.846738 6196 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:01:53.846867 6196 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:01:53.847346 6196 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:01:53.847384 6196 factory.go:656] Stopping watch factory\\\\nI0128 15:01:53.847408 6196 ovnkube.go:599] Stopped ovnkube\\\\nI0128 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aefad88fce146a71537f97c7d24b173334df3029b961374eac0ce4a7abc1a256\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"message\\\":\\\"nshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} 
protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 15:01:55.096197 6332 services_controller.go:356] Processing sync for service openshift-ingress-operator/metrics for network=default\\\\nI0128 15:01:55.096191 6332 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.110:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f9232b32-e89f-4c8e-acc4-c6801b70dcb0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0128 15:01:55.096075 6332 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.336705 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.354024 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.368010 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.368064 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.368082 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.368104 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.368120 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:56Z","lastTransitionTime":"2026-01-28T15:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.371193 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"673baa26-aa9b-4740-b00a-27d20d947fc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9hnxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.386992 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.402646 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.418626 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.434732 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab
95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.450178 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.464745 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.470773 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.470812 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.470825 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.470844 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.470857 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:56Z","lastTransitionTime":"2026-01-28T15:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.480954 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.494526 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.509882 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.527327 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.541460 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.561303 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.573542 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.573618 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.573639 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.573670 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.573689 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:56Z","lastTransitionTime":"2026-01-28T15:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.579322 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.604223 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\
\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aefad88fce146a71537f97c7d24b173334df3029b961374eac0ce4a7abc1a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97a23d69b3042bf2700cbd26f6ea69a6bc1c6df2e6c99e1e2813b0955fc78ecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:01:53Z\\\",\\\"message\\\":\\\"53.845419 6196 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 15:01:53.845652 6196 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:01:53.845781 6196 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:01:53.845935 6196 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:01:53.846218 6196 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:01:53.846738 6196 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:01:53.846867 6196 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:01:53.847346 6196 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:01:53.847384 6196 factory.go:656] Stopping watch factory\\\\nI0128 15:01:53.847408 6196 ovnkube.go:599] Stopped ovnkube\\\\nI0128 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aefad88fce146a71537f97c7d24b173334df3029b961374eac0ce4a7abc1a256\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"message\\\":\\\"nshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 15:01:55.096197 6332 services_controller.go:356] Processing sync for service openshift-ingress-operator/metrics for network=default\\\\nI0128 15:01:55.096191 6332 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} 
name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.110:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f9232b32-e89f-4c8e-acc4-c6801b70dcb0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0128 15:01:55.096075 6332 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\
\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.605756 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-dqjfn"] Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.606332 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:01:56 crc kubenswrapper[4893]: E0128 15:01:56.606404 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.619294 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.632540 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.643312 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.658802 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.673598 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"673baa26-aa9b-4740-b00a-27d20d947fc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0855aa9e95ef9375ded8247008dd1b259573f1b89b2aacb460cb2de280d4289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fc6009434717a8377a5cc6a836235c61de79ea9f85581a22d8c2229dde2c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9hnxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 
15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.683487 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.683621 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.684296 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.684336 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.684355 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:56Z","lastTransitionTime":"2026-01-28T15:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.696530 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 
15:01:56.702133 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/27c2667f-3b81-4103-b924-fd2ec1678757-metrics-certs\") pod \"network-metrics-daemon-dqjfn\" (UID: \"27c2667f-3b81-4103-b924-fd2ec1678757\") " pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.702196 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c28r2\" (UniqueName: \"kubernetes.io/projected/27c2667f-3b81-4103-b924-fd2ec1678757-kube-api-access-c28r2\") pod \"network-metrics-daemon-dqjfn\" (UID: \"27c2667f-3b81-4103-b924-fd2ec1678757\") " pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.708286 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc 
kubenswrapper[4893]: I0128 15:01:56.721560 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.743872 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d0
1f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.769601 4893 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32
fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aefad88fce146a71537f97c7d24b173334df3029b961374eac0ce4a7abc1a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97a23d69b3042bf2700cbd26f6ea69a6bc1c6df2e6c99e1e2813b0955fc78ecf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:01:53Z\\\",\\\"message\\\":\\\"53.845419 6196 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 15:01:53.845652 6196 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:01:53.845781 6196 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:01:53.845935 6196 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:01:53.846218 6196 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:01:53.846738 6196 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:01:53.846867 6196 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:01:53.847346 6196 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:01:53.847384 6196 factory.go:656] Stopping watch factory\\\\nI0128 15:01:53.847408 6196 ovnkube.go:599] Stopped ovnkube\\\\nI0128 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aefad88fce146a71537f97c7d24b173334df3029b961374eac0ce4a7abc1a256\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"message\\\":\\\"nshift-marketplace/community-operators_TCP_cluster 
options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 15:01:55.096197 6332 services_controller.go:356] Processing sync for service openshift-ingress-operator/metrics for network=default\\\\nI0128 15:01:55.096191 6332 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.110:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f9232b32-e89f-4c8e-acc4-c6801b70dcb0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0128 15:01:55.096075 6332 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.781162 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dqjfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27c2667f-3b81-4103-b924-fd2ec1678757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dqjfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.786591 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.786636 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.786647 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.786854 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.786864 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:56Z","lastTransitionTime":"2026-01-28T15:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.798536 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.802940 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c28r2\" (UniqueName: \"kubernetes.io/projected/27c2667f-3b81-4103-b924-fd2ec1678757-kube-api-access-c28r2\") pod \"network-metrics-daemon-dqjfn\" (UID: \"27c2667f-3b81-4103-b924-fd2ec1678757\") " pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.803030 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/27c2667f-3b81-4103-b924-fd2ec1678757-metrics-certs\") pod \"network-metrics-daemon-dqjfn\" (UID: \"27c2667f-3b81-4103-b924-fd2ec1678757\") " pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:01:56 crc kubenswrapper[4893]: E0128 15:01:56.803149 4893 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:01:56 crc kubenswrapper[4893]: E0128 15:01:56.803216 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/27c2667f-3b81-4103-b924-fd2ec1678757-metrics-certs podName:27c2667f-3b81-4103-b924-fd2ec1678757 nodeName:}" failed. No retries permitted until 2026-01-28 15:01:57.303197943 +0000 UTC m=+35.076812971 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/27c2667f-3b81-4103-b924-fd2ec1678757-metrics-certs") pod "network-metrics-daemon-dqjfn" (UID: "27c2667f-3b81-4103-b924-fd2ec1678757") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.812110 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.823209 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.829558 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c28r2\" (UniqueName: \"kubernetes.io/projected/27c2667f-3b81-4103-b924-fd2ec1678757-kube-api-access-c28r2\") pod \"network-metrics-daemon-dqjfn\" (UID: \"27c2667f-3b81-4103-b924-fd2ec1678757\") " pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.836211 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.849208 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"673baa26-aa9b-4740-b00a-27d20d947fc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0855aa9e95ef9375ded8247008dd1b259573f1b89b2aacb460cb2de280d4289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fc6009434717a8377a5cc6a836235c61de79ea9f85581a22d8c2229dde2c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9hnxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 
15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.850061 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 08:41:28.022640356 +0000 UTC Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.862910 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.874036 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.888169 4893 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.888838 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.888887 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.888899 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.888921 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.888936 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:56Z","lastTransitionTime":"2026-01-28T15:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.891023 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.891043 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.891064 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:01:56 crc kubenswrapper[4893]: E0128 15:01:56.891156 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:01:56 crc kubenswrapper[4893]: E0128 15:01:56.891238 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:01:56 crc kubenswrapper[4893]: E0128 15:01:56.891334 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.903119 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.913448 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.924486 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.937372 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.951275 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:56Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.991326 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.991371 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.991384 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.991402 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:56 crc kubenswrapper[4893]: I0128 15:01:56.991415 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:56Z","lastTransitionTime":"2026-01-28T15:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.094030 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.094110 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.094124 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.094142 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.094154 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:57Z","lastTransitionTime":"2026-01-28T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.196530 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.196569 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.196583 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.196601 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.196613 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:57Z","lastTransitionTime":"2026-01-28T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.230277 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5q54w_135b9f51-26ac-44c4-a817-cbfa4b36ae54/ovnkube-controller/1.log" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.234500 4893 scope.go:117] "RemoveContainer" containerID="aefad88fce146a71537f97c7d24b173334df3029b961374eac0ce4a7abc1a256" Jan 28 15:01:57 crc kubenswrapper[4893]: E0128 15:01:57.234715 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-5q54w_openshift-ovn-kubernetes(135b9f51-26ac-44c4-a817-cbfa4b36ae54)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.249719 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.272429 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"
},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.288215 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus
/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.299646 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.299690 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.299701 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.299719 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.299731 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:57Z","lastTransitionTime":"2026-01-28T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.303920 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.307445 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/27c2667f-3b81-4103-b924-fd2ec1678757-metrics-certs\") pod \"network-metrics-daemon-dqjfn\" (UID: \"27c2667f-3b81-4103-b924-fd2ec1678757\") " pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:01:57 crc kubenswrapper[4893]: E0128 15:01:57.307600 4893 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:01:57 crc kubenswrapper[4893]: E0128 15:01:57.307679 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/27c2667f-3b81-4103-b924-fd2ec1678757-metrics-certs podName:27c2667f-3b81-4103-b924-fd2ec1678757 nodeName:}" failed. No retries permitted until 2026-01-28 15:01:58.307655702 +0000 UTC m=+36.081270800 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/27c2667f-3b81-4103-b924-fd2ec1678757-metrics-certs") pod "network-metrics-daemon-dqjfn" (UID: "27c2667f-3b81-4103-b924-fd2ec1678757") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.318996 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"
,\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.334624 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.354258 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.367897 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dqjfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27c2667f-3b81-4103-b924-fd2ec1678757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dqjfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.386166 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.398706 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.402288 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.402320 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.402330 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.402345 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.402355 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:57Z","lastTransitionTime":"2026-01-28T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.412025 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:57 crc 
kubenswrapper[4893]: I0128 15:01:57.428940 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"
cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.452173 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aefad88fce146a71537f97c7d24b173334df3029b961374eac0ce4a7abc1a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aefad88fce146a71537f97c7d24b173334df3029b961374eac0ce4a7abc1a256\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"message\\\":\\\"nshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 15:01:55.096197 6332 services_controller.go:356] Processing sync for service openshift-ingress-operator/metrics for network=default\\\\nI0128 15:01:55.096191 6332 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.110:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f9232b32-e89f-4c8e-acc4-c6801b70dcb0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0128 15:01:55.096075 6332 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5q54w_openshift-ovn-kubernetes(135b9f51-26ac-44c4-a817-cbfa4b36ae54)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.469402 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.481544 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:57Z is after 2025-08-24T17:21:41Z" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.495968 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"673baa26-aa9b-4740-b00a-27d20d947fc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0855aa9e95ef9375ded8247008dd1b259573f1b89b2aacb460cb2de280d4289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fc6009434717a8377a5cc6a836235c61de79ea9f85581a22d8c2229dde2c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9hnxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:01:57Z is after 2025-08-24T17:21:41Z" Jan 28 
15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.505343 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.505396 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.505411 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.505434 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.505450 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:57Z","lastTransitionTime":"2026-01-28T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.608795 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.608872 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.608892 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.608920 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.608943 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:57Z","lastTransitionTime":"2026-01-28T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.711338 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.711407 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.711423 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.711445 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.711460 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:57Z","lastTransitionTime":"2026-01-28T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.815614 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.815659 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.815670 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.815688 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.815701 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:57Z","lastTransitionTime":"2026-01-28T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.850483 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 21:51:58.586088175 +0000 UTC Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.891221 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:01:57 crc kubenswrapper[4893]: E0128 15:01:57.891409 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.918662 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.918749 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.918777 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.918817 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:57 crc kubenswrapper[4893]: I0128 15:01:57.918842 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:57Z","lastTransitionTime":"2026-01-28T15:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.021079 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.021128 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.021140 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.021155 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.021165 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:58Z","lastTransitionTime":"2026-01-28T15:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.124319 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.124364 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.124390 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.124411 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.124424 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:58Z","lastTransitionTime":"2026-01-28T15:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.227986 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.228043 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.228059 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.228079 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.228093 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:58Z","lastTransitionTime":"2026-01-28T15:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.319150 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/27c2667f-3b81-4103-b924-fd2ec1678757-metrics-certs\") pod \"network-metrics-daemon-dqjfn\" (UID: \"27c2667f-3b81-4103-b924-fd2ec1678757\") " pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:01:58 crc kubenswrapper[4893]: E0128 15:01:58.319394 4893 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:01:58 crc kubenswrapper[4893]: E0128 15:01:58.319519 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/27c2667f-3b81-4103-b924-fd2ec1678757-metrics-certs podName:27c2667f-3b81-4103-b924-fd2ec1678757 nodeName:}" failed. No retries permitted until 2026-01-28 15:02:00.319496861 +0000 UTC m=+38.093111879 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/27c2667f-3b81-4103-b924-fd2ec1678757-metrics-certs") pod "network-metrics-daemon-dqjfn" (UID: "27c2667f-3b81-4103-b924-fd2ec1678757") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.331051 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.331129 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.331145 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.331185 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.331198 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:58Z","lastTransitionTime":"2026-01-28T15:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.434637 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.434727 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.434752 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.434789 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.434814 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:58Z","lastTransitionTime":"2026-01-28T15:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.538760 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.538822 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.538834 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.538857 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.538870 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:58Z","lastTransitionTime":"2026-01-28T15:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.624138 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.624304 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.624364 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:01:58 crc kubenswrapper[4893]: E0128 15:01:58.624391 4893 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.624402 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:01:58 crc kubenswrapper[4893]: E0128 15:01:58.624521 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:02:14.624493647 +0000 UTC m=+52.398108685 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:01:58 crc kubenswrapper[4893]: E0128 15:01:58.624569 4893 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:01:58 crc kubenswrapper[4893]: E0128 15:01:58.624682 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:02:14.624659271 +0000 UTC m=+52.398274299 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:01:58 crc kubenswrapper[4893]: E0128 15:01:58.624712 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:01:58 crc kubenswrapper[4893]: E0128 15:01:58.624764 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:01:58 crc kubenswrapper[4893]: E0128 15:01:58.624780 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:01:58 crc kubenswrapper[4893]: E0128 15:01:58.624795 4893 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:01:58 crc kubenswrapper[4893]: E0128 15:01:58.624813 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:01:58 crc kubenswrapper[4893]: E0128 15:01:58.624859 4893 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:01:58 crc kubenswrapper[4893]: E0128 15:01:58.624879 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:02:14.624854247 +0000 UTC m=+52.398469435 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:01:58 crc kubenswrapper[4893]: E0128 15:01:58.624975 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:02:14.624947899 +0000 UTC m=+52.398563127 (durationBeforeRetry 16s). 
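The durationBeforeRetry values above grow from 2s toward 16s because the kubelet backs off exponentially on repeated setup failures for the same volume operation. A minimal Go sketch of that doubling-with-a-cap progression; the starting delay and cap are taken from what the log happens to show and are illustrative, not the kubelet's exact constants:

    package main

    import (
        "fmt"
        "time"
    )

    // nextDelay doubles the previous retry delay up to a cap, mirroring the
    // durationBeforeRetry progression (2s, 4s, ..., 16s) visible in the log.
    func nextDelay(prev, max time.Duration) time.Duration {
        if prev == 0 {
            return 2 * time.Second // first retry delay seen above (assumed base)
        }
        if next := prev * 2; next < max {
            return next
        }
        return max
    }

    func main() {
        var d time.Duration
        for i := 0; i < 5; i++ {
            d = nextDelay(d, 16*time.Second)
            fmt.Println(d) // prints 2s, 4s, 8s, 16s, 16s
        }
    }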
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.641806 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.641887 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.641910 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.641945 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.642024 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:58Z","lastTransitionTime":"2026-01-28T15:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.725977 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:01:58 crc kubenswrapper[4893]: E0128 15:01:58.726282 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:02:14.726255218 +0000 UTC m=+52.499870246 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.744348 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.744416 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.744430 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.744466 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.744494 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:58Z","lastTransitionTime":"2026-01-28T15:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.847712 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.847775 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.847787 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.847802 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.847813 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:58Z","lastTransitionTime":"2026-01-28T15:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.850874 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 08:07:11.875100238 +0000 UTC Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.891545 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.891585 4893 util.go:30] "No sandbox for pod can be found. 
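The TearDown failure above is a registration problem rather than a mount problem: the kubelet cannot find kubevirt.io.hostpath-provisioner among the CSI drivers currently registered on the node, and the unmount keeps retrying until the driver's node plugin re-registers. A hedged client-go sketch for listing what the node's CSINode object advertises; the kubeconfig location and the node name "crc" are assumptions:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes a reachable kubeconfig in the default home location.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        csiNode, err := client.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // kubevirt.io.hostpath-provisioner must appear in this list before
        // the pending UnmountVolume retry can succeed.
        for _, drv := range csiNode.Spec.Drivers {
            fmt.Println(drv.Name)
        }
    }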
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.891649 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:01:58 crc kubenswrapper[4893]: E0128 15:01:58.891826 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:01:58 crc kubenswrapper[4893]: E0128 15:01:58.891917 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:01:58 crc kubenswrapper[4893]: E0128 15:01:58.892047 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.950222 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.950284 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.950299 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.950318 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:58 crc kubenswrapper[4893]: I0128 15:01:58.950330 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:58Z","lastTransitionTime":"2026-01-28T15:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.052962 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.053016 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.053026 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.053042 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.053053 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:59Z","lastTransitionTime":"2026-01-28T15:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.154859 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.154910 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.154920 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.154938 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.154948 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:59Z","lastTransitionTime":"2026-01-28T15:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.260793 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.260874 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.261910 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.262694 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.262785 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:59Z","lastTransitionTime":"2026-01-28T15:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.367289 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.367345 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.367363 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.367385 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.367400 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:59Z","lastTransitionTime":"2026-01-28T15:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.472063 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.472147 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.472158 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.472174 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.472183 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:59Z","lastTransitionTime":"2026-01-28T15:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.574146 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.574193 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.574205 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.574223 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.574235 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:59Z","lastTransitionTime":"2026-01-28T15:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.677566 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.677612 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.677621 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.677638 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.677648 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:59Z","lastTransitionTime":"2026-01-28T15:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.780606 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.780663 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.780672 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.780693 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.780706 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:59Z","lastTransitionTime":"2026-01-28T15:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.852038 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 18:37:09.020769843 +0000 UTC Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.883398 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.883457 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.883506 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.883537 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.883562 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:59Z","lastTransitionTime":"2026-01-28T15:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.891743 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:01:59 crc kubenswrapper[4893]: E0128 15:01:59.891932 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.986333 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.986407 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.986425 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.986447 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:01:59 crc kubenswrapper[4893]: I0128 15:01:59.986464 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:01:59Z","lastTransitionTime":"2026-01-28T15:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.089610 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.089709 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.089737 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.089840 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.089877 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:00Z","lastTransitionTime":"2026-01-28T15:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.193109 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.193187 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.193211 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.193280 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.193305 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:00Z","lastTransitionTime":"2026-01-28T15:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.296679 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.296792 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.296828 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.296864 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.296884 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:00Z","lastTransitionTime":"2026-01-28T15:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.345418 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/27c2667f-3b81-4103-b924-fd2ec1678757-metrics-certs\") pod \"network-metrics-daemon-dqjfn\" (UID: \"27c2667f-3b81-4103-b924-fd2ec1678757\") " pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:02:00 crc kubenswrapper[4893]: E0128 15:02:00.345673 4893 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:02:00 crc kubenswrapper[4893]: E0128 15:02:00.345802 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/27c2667f-3b81-4103-b924-fd2ec1678757-metrics-certs podName:27c2667f-3b81-4103-b924-fd2ec1678757 nodeName:}" failed. No retries permitted until 2026-01-28 15:02:04.345770886 +0000 UTC m=+42.119386114 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/27c2667f-3b81-4103-b924-fd2ec1678757-metrics-certs") pod "network-metrics-daemon-dqjfn" (UID: "27c2667f-3b81-4103-b924-fd2ec1678757") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.399700 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.399781 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.399796 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.399815 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.399825 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:00Z","lastTransitionTime":"2026-01-28T15:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.502979 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.503029 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.503044 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.503063 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.503075 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:00Z","lastTransitionTime":"2026-01-28T15:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.605611 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.605668 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.605681 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.605702 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.605715 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:00Z","lastTransitionTime":"2026-01-28T15:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.709192 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.709253 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.709267 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.709290 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.709305 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:00Z","lastTransitionTime":"2026-01-28T15:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.813122 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.813278 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.813364 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.813442 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.813530 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:00Z","lastTransitionTime":"2026-01-28T15:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.852561 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 01:00:10.952967365 +0000 UTC Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.891595 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.891638 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.891656 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:02:00 crc kubenswrapper[4893]: E0128 15:02:00.891958 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:02:00 crc kubenswrapper[4893]: E0128 15:02:00.892075 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:02:00 crc kubenswrapper[4893]: E0128 15:02:00.892233 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.893174 4893 scope.go:117] "RemoveContainer" containerID="fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.916359 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.916394 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.916402 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.916418 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:00 crc kubenswrapper[4893]: I0128 15:02:00.916430 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:00Z","lastTransitionTime":"2026-01-28T15:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.019829 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.019874 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.019886 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.019909 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.019923 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:01Z","lastTransitionTime":"2026-01-28T15:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.124115 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.124172 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.124184 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.124204 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.124217 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:01Z","lastTransitionTime":"2026-01-28T15:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.227621 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.228189 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.228205 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.228236 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.228253 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:01Z","lastTransitionTime":"2026-01-28T15:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.251633 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.253945 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4132d2592d5c12a6e2fe340b9a6bc7fb6104bcd378bb48e44da753c0ed1be380"} Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.254498 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.277236 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.295842 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.313465 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.331905 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.331992 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.332021 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.332059 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.332085 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:01Z","lastTransitionTime":"2026-01-28T15:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.334381 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f27
36ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.358731 4893 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32
fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aefad88fce146a71537f97c7d24b173334df3029b961374eac0ce4a7abc1a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aefad88fce146a71537f97c7d24b173334df3029b961374eac0ce4a7abc1a256\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"message\\\":\\\"nshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 15:01:55.096197 6332 services_controller.go:356] Processing sync for service openshift-ingress-operator/metrics for network=default\\\\nI0128 15:01:55.096191 6332 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.110:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f9232b32-e89f-4c8e-acc4-c6801b70dcb0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0128 15:01:55.096075 6332 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5q54w_openshift-ovn-kubernetes(135b9f51-26ac-44c4-a817-cbfa4b36ae54)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.377667 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dqjfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27c2667f-3b81-4103-b924-fd2ec1678757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dqjfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.395942 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.412049 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.428520 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"673baa26-aa9b-4740-b00a-27d20d947fc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0855aa9e95ef9375ded8247008dd1b259573f1b89b2aacb460cb2de280d4289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fc6009434717a8377a5cc6a836235c61de79ea9f85581a22d8c2229dde2c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9hnxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 28 
15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.434350 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.434404 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.434420 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.434442 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.434458 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:01Z","lastTransitionTime":"2026-01-28T15:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.447452 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63
a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.464889 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347202
43b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.482116 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4132d2592d5c12a6e2fe340b9a6bc7fb6104bcd378bb48e44da753c0ed1be380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.497429 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.510039 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.524815 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.536439 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.536502 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.536514 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.536534 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.536552 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:01Z","lastTransitionTime":"2026-01-28T15:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.538536 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:01Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.639932 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.639971 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.639983 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.639999 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.640010 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:01Z","lastTransitionTime":"2026-01-28T15:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.742284 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.742328 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.742343 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.742361 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.742372 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:01Z","lastTransitionTime":"2026-01-28T15:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.845560 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.845620 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.845636 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.845659 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.845675 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:01Z","lastTransitionTime":"2026-01-28T15:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.852859 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 19:48:45.581377415 +0000 UTC Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.891686 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:02:01 crc kubenswrapper[4893]: E0128 15:02:01.891956 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.949987 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.950067 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.950115 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.950149 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:01 crc kubenswrapper[4893]: I0128 15:02:01.950171 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:01Z","lastTransitionTime":"2026-01-28T15:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.054300 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.054345 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.054355 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.054373 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.054386 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:02Z","lastTransitionTime":"2026-01-28T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.157203 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.157241 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.157251 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.157268 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.157278 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:02Z","lastTransitionTime":"2026-01-28T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.259733 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.259782 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.259792 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.259808 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.259821 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:02Z","lastTransitionTime":"2026-01-28T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.363599 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.363722 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.363783 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.363817 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.363837 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:02Z","lastTransitionTime":"2026-01-28T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.469095 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.469169 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.469193 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.469267 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.469293 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:02Z","lastTransitionTime":"2026-01-28T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.573021 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.573086 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.573101 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.573122 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.573134 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:02Z","lastTransitionTime":"2026-01-28T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.677153 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.677233 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.677247 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.677268 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.677280 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:02Z","lastTransitionTime":"2026-01-28T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.781131 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.781227 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.781269 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.781320 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.781347 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:02Z","lastTransitionTime":"2026-01-28T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.854200 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 15:30:03.879826791 +0000 UTC Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.884271 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.884354 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.884367 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.884385 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.884396 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:02Z","lastTransitionTime":"2026-01-28T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.891721 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.891774 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:02:02 crc kubenswrapper[4893]: E0128 15:02:02.891840 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:02:02 crc kubenswrapper[4893]: E0128 15:02:02.891964 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.892287 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:02:02 crc kubenswrapper[4893]: E0128 15:02:02.892350 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.905383 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.905444 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.905463 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.905554 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.905590 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:02Z","lastTransitionTime":"2026-01-28T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.907010 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-28T15:02:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.923667 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:02 crc kubenswrapper[4893]: E0128 15:02:02.925320 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.932841 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.932923 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.932936 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.932961 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.932979 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:02Z","lastTransitionTime":"2026-01-28T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.951665 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:02 crc kubenswrapper[4893]: E0128 15:02:02.956808 4893 kubelet_node_status.go:585] "Error updating node status, will retry" 
err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329b
a568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.963392 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.963449 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.963465 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.963510 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.963525 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:02Z","lastTransitionTime":"2026-01-28T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.981100 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aefad88fce146a71537f97c7d24b173334df3029b961374eac0ce4a7abc1a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aefad88fce146a71537f97c7d24b173334df3029b961374eac0ce4a7abc1a256\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"message\\\":\\\"nshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 15:01:55.096197 6332 services_controller.go:356] Processing sync for service openshift-ingress-operator/metrics for network=default\\\\nI0128 15:01:55.096191 6332 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.110:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f9232b32-e89f-4c8e-acc4-c6801b70dcb0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0128 15:01:55.096075 6332 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-5q54w_openshift-ovn-kubernetes(135b9f51-26ac-44c4-a817-cbfa4b36ae54)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:02 crc kubenswrapper[4893]: E0128 15:02:02.982935 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeByt
es\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.988303 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.988348 4893 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.988359 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.988377 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:02 crc kubenswrapper[4893]: I0128 15:02:02.988391 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:02Z","lastTransitionTime":"2026-01-28T15:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.000680 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dqjfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27c2667f-3b81-4103-b924-fd2ec1678757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dqjfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:03 crc kubenswrapper[4893]: E0128 15:02:03.001372 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-28T15:02:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.005526 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.005553 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.005565 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.005584 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.005596 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:03Z","lastTransitionTime":"2026-01-28T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.016019 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:03 crc kubenswrapper[4893]: E0128 15:02:03.020598 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:03 crc kubenswrapper[4893]: E0128 15:02:03.020832 4893 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.022644 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.022712 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.022725 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.022744 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.022757 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:03Z","lastTransitionTime":"2026-01-28T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.033416 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.050239 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.066776 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"673baa26-aa9b-4740-b00a-27d20d947fc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0855aa9e95ef9375ded8247008dd1b259573f1b89b2aacb460cb2de280d4289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fc6009434717a8377a5cc6a836235c61de79ea9f85581a22d8c2229dde2c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9hnxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:03Z is after 2025-08-24T17:21:41Z" Jan 28 
15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.080304 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-28T15:02:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.097642 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4132d2592d5c12a6e2fe340b9a6bc7fb6104bcd378bb48e44da753c0ed1be380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.115225 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.126572 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.126641 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.126656 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.126677 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.126689 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:03Z","lastTransitionTime":"2026-01-28T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.131432 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.149883 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.165306 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.179204 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.229627 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.229681 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.229693 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.229710 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.229724 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:03Z","lastTransitionTime":"2026-01-28T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.331765 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.331798 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.331808 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.331824 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.331834 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:03Z","lastTransitionTime":"2026-01-28T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.434797 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.434867 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.434888 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.434924 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.434944 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:03Z","lastTransitionTime":"2026-01-28T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.538142 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.538196 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.538213 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.538236 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.538252 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:03Z","lastTransitionTime":"2026-01-28T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.641692 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.642109 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.642256 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.642397 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.642564 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:03Z","lastTransitionTime":"2026-01-28T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.747090 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.747197 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.747217 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.747282 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.747303 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:03Z","lastTransitionTime":"2026-01-28T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.850356 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.850407 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.850418 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.850434 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.850446 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:03Z","lastTransitionTime":"2026-01-28T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.854746 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 22:22:16.973748037 +0000 UTC
Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.891197 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn"
Jan 28 15:02:03 crc kubenswrapper[4893]: E0128 15:02:03.891522 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757"
Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.953209 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.953269 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.953281 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.953300 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:03 crc kubenswrapper[4893]: I0128 15:02:03.953310 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:03Z","lastTransitionTime":"2026-01-28T15:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.056332 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.056398 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.056425 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.056464 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.056503 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:04Z","lastTransitionTime":"2026-01-28T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.159456 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.159529 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.159568 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.159587 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.159601 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:04Z","lastTransitionTime":"2026-01-28T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.262382 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.262423 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.262436 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.262452 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.262464 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:04Z","lastTransitionTime":"2026-01-28T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.366526 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.366623 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.366644 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.366671 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.366690 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:04Z","lastTransitionTime":"2026-01-28T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.397359 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/27c2667f-3b81-4103-b924-fd2ec1678757-metrics-certs\") pod \"network-metrics-daemon-dqjfn\" (UID: \"27c2667f-3b81-4103-b924-fd2ec1678757\") " pod="openshift-multus/network-metrics-daemon-dqjfn"
Jan 28 15:02:04 crc kubenswrapper[4893]: E0128 15:02:04.397697 4893 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 28 15:02:04 crc kubenswrapper[4893]: E0128 15:02:04.397866 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/27c2667f-3b81-4103-b924-fd2ec1678757-metrics-certs podName:27c2667f-3b81-4103-b924-fd2ec1678757 nodeName:}" failed. No retries permitted until 2026-01-28 15:02:12.397826464 +0000 UTC m=+50.171441522 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/27c2667f-3b81-4103-b924-fd2ec1678757-metrics-certs") pod "network-metrics-daemon-dqjfn" (UID: "27c2667f-3b81-4103-b924-fd2ec1678757") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.469313 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.469425 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.469448 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.469504 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.469525 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:04Z","lastTransitionTime":"2026-01-28T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.572326 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.572405 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.572435 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.572505 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.572550 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:04Z","lastTransitionTime":"2026-01-28T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.675276 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.675339 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.675358 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.675386 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.675401 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:04Z","lastTransitionTime":"2026-01-28T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.778835 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.778909 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.778939 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.778974 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.778997 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:04Z","lastTransitionTime":"2026-01-28T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.855654 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 21:07:17.317429859 +0000 UTC
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.882579 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.882659 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.882684 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.882713 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.882734 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:04Z","lastTransitionTime":"2026-01-28T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.890963 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.891013 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.891047 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:02:04 crc kubenswrapper[4893]: E0128 15:02:04.891233 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:02:04 crc kubenswrapper[4893]: E0128 15:02:04.891365 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:02:04 crc kubenswrapper[4893]: E0128 15:02:04.891598 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.985714 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.985791 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.985808 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.985828 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:04 crc kubenswrapper[4893]: I0128 15:02:04.985839 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:04Z","lastTransitionTime":"2026-01-28T15:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.089448 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.089561 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.089587 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.089620 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.089640 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:05Z","lastTransitionTime":"2026-01-28T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.193144 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.193183 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.193196 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.193214 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.193226 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:05Z","lastTransitionTime":"2026-01-28T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.295757 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.295816 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.295832 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.295853 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.295865 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:05Z","lastTransitionTime":"2026-01-28T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.398914 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.398986 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.399006 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.399039 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.399061 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:05Z","lastTransitionTime":"2026-01-28T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.503058 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.503444 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.503687 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.503833 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.503973 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:05Z","lastTransitionTime":"2026-01-28T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.607456 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.607760 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.607907 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.608031 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.608103 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:05Z","lastTransitionTime":"2026-01-28T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.713146 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.713599 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.713810 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.714060 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.714319 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:05Z","lastTransitionTime":"2026-01-28T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.818376 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.818425 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.818440 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.818460 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.818596 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:05Z","lastTransitionTime":"2026-01-28T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.856366 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 16:10:28.703550856 +0000 UTC
Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.891415 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn"
Jan 28 15:02:05 crc kubenswrapper[4893]: E0128 15:02:05.891649 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757"
Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.922157 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.922228 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.922278 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.922311 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:05 crc kubenswrapper[4893]: I0128 15:02:05.922333 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:05Z","lastTransitionTime":"2026-01-28T15:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.026024 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.026359 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.026463 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.026612 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.026706 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:06Z","lastTransitionTime":"2026-01-28T15:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.129796 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.129837 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.129847 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.129862 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.129872 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:06Z","lastTransitionTime":"2026-01-28T15:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.233259 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.233308 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.233325 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.233348 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.233365 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:06Z","lastTransitionTime":"2026-01-28T15:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.336626 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.336692 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.336708 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.336733 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.336753 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:06Z","lastTransitionTime":"2026-01-28T15:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.439372 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.439442 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.439466 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.439570 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.439601 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:06Z","lastTransitionTime":"2026-01-28T15:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.542046 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.542106 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.542119 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.542138 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.542151 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:06Z","lastTransitionTime":"2026-01-28T15:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.644859 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.644905 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.644916 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.644932 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.644940 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:06Z","lastTransitionTime":"2026-01-28T15:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.747611 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.747662 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.747672 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.747692 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.747767 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:06Z","lastTransitionTime":"2026-01-28T15:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.850803 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.850873 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.850891 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.850917 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.850936 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:06Z","lastTransitionTime":"2026-01-28T15:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.857221 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 08:13:35.214900789 +0000 UTC
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.891811 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:02:06 crc kubenswrapper[4893]: E0128 15:02:06.892015 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.892545 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:02:06 crc kubenswrapper[4893]: E0128 15:02:06.892658 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.892735 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:02:06 crc kubenswrapper[4893]: E0128 15:02:06.892813 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.953396 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.953457 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.953500 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.953525 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:06 crc kubenswrapper[4893]: I0128 15:02:06.953542 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:06Z","lastTransitionTime":"2026-01-28T15:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.056717 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.056801 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.056818 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.056846 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.056866 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:07Z","lastTransitionTime":"2026-01-28T15:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.160857 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.160924 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.160940 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.160963 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.160978 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:07Z","lastTransitionTime":"2026-01-28T15:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.264122 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.264180 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.264194 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.264215 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.264230 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:07Z","lastTransitionTime":"2026-01-28T15:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.369749 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.369867 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.369892 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.369928 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.369950 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:07Z","lastTransitionTime":"2026-01-28T15:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.474027 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.474093 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.474111 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.474134 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.474156 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:07Z","lastTransitionTime":"2026-01-28T15:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.577069 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.577126 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.577138 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.577157 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.577169 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:07Z","lastTransitionTime":"2026-01-28T15:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.681357 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.681403 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.681415 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.681433 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.681442 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:07Z","lastTransitionTime":"2026-01-28T15:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.784294 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.784361 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.784380 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.784411 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.784429 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:07Z","lastTransitionTime":"2026-01-28T15:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.857571 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 09:23:21.841295892 +0000 UTC
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.887455 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.887887 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.887918 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.887958 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.887988 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:07Z","lastTransitionTime":"2026-01-28T15:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.891880 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn"
Jan 28 15:02:07 crc kubenswrapper[4893]: E0128 15:02:07.892040 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.992074 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.992146 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.992166 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.992195 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:07 crc kubenswrapper[4893]: I0128 15:02:07.992215 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:07Z","lastTransitionTime":"2026-01-28T15:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.094971 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.095028 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.095050 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.095082 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.095105 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:08Z","lastTransitionTime":"2026-01-28T15:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.198160 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.198215 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.198234 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.198258 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.198275 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:08Z","lastTransitionTime":"2026-01-28T15:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.300877 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.300952 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.300996 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.301031 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.301059 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:08Z","lastTransitionTime":"2026-01-28T15:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.404341 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.404386 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.404395 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.404412 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.404420 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:08Z","lastTransitionTime":"2026-01-28T15:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.507814 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.507862 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.507875 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.507894 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.507906 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:08Z","lastTransitionTime":"2026-01-28T15:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.611645 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.611720 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.611745 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.611775 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.611796 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:08Z","lastTransitionTime":"2026-01-28T15:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.714270 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.714338 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.714351 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.714372 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.714387 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:08Z","lastTransitionTime":"2026-01-28T15:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.817112 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.817157 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.817167 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.817184 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.817192 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:08Z","lastTransitionTime":"2026-01-28T15:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.858558 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 15:43:24.938280667 +0000 UTC Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.891538 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.891625 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.891663 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:02:08 crc kubenswrapper[4893]: E0128 15:02:08.891829 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:02:08 crc kubenswrapper[4893]: E0128 15:02:08.891982 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:02:08 crc kubenswrapper[4893]: E0128 15:02:08.892050 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.919400 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.919606 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.919687 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.919750 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:08 crc kubenswrapper[4893]: I0128 15:02:08.919811 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:08Z","lastTransitionTime":"2026-01-28T15:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.022675 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.023002 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.023089 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.023185 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.023263 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:09Z","lastTransitionTime":"2026-01-28T15:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.126846 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.126911 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.126935 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.126964 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.126983 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:09Z","lastTransitionTime":"2026-01-28T15:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.229011 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.229054 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.229063 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.229079 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.229089 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:09Z","lastTransitionTime":"2026-01-28T15:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.331928 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.331970 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.331980 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.331995 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.332007 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:09Z","lastTransitionTime":"2026-01-28T15:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.434691 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.434735 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.434746 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.434763 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.434783 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:09Z","lastTransitionTime":"2026-01-28T15:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.538654 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.539822 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.539845 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.539867 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.539881 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:09Z","lastTransitionTime":"2026-01-28T15:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.642918 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.642986 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.642999 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.643038 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.643054 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:09Z","lastTransitionTime":"2026-01-28T15:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.746001 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.746045 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.746056 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.746074 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.746085 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:09Z","lastTransitionTime":"2026-01-28T15:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.849063 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.849113 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.849127 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.849148 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.849163 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:09Z","lastTransitionTime":"2026-01-28T15:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.859349 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 01:13:29.296450165 +0000 UTC Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.891142 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:02:09 crc kubenswrapper[4893]: E0128 15:02:09.891297 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.951549 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.951615 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.951632 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.951651 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:09 crc kubenswrapper[4893]: I0128 15:02:09.951661 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:09Z","lastTransitionTime":"2026-01-28T15:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.054498 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.054533 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.054543 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.054557 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.054566 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:10Z","lastTransitionTime":"2026-01-28T15:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.158042 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.158113 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.158137 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.158167 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.158190 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:10Z","lastTransitionTime":"2026-01-28T15:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.212509 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.223338 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.241609 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:10Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.261456 4893 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.261522 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.261533 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.261549 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.261577 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:10Z","lastTransitionTime":"2026-01-28T15:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.263542 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:10Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.283324 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4132d2592d5c12a6e2fe340b9a6bc7fb6104bcd378bb48e44da753c0ed1be380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:10Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.296946 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:10Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.309503 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:10Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.325529 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:10Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.340452 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"h
ostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:10Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.355385 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:10Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.364702 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.364782 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.364940 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.365024 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.365059 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:10Z","lastTransitionTime":"2026-01-28T15:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.369183 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:10Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.381833 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:10Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.399841 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:10Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.424784 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aefad88fce146a71537f97c7d24b173334df3029b961374eac0ce4a7abc1a256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aefad88fce146a71537f97c7d24b173334df3029b961374eac0ce4a7abc1a256\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"message\\\":\\\"nshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 15:01:55.096197 6332 services_controller.go:356] Processing sync for service openshift-ingress-operator/metrics for network=default\\\\nI0128 15:01:55.096191 6332 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.110:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f9232b32-e89f-4c8e-acc4-c6801b70dcb0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0128 15:01:55.096075 6332 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5q54w_openshift-ovn-kubernetes(135b9f51-26ac-44c4-a817-cbfa4b36ae54)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:10Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.435140 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dqjfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27c2667f-3b81-4103-b924-fd2ec1678757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dqjfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:10Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.446847 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:10Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.457466 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:10Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.467525 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.467589 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.467603 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.467625 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.467636 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:10Z","lastTransitionTime":"2026-01-28T15:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.469848 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"673baa26-aa9b-4740-b00a-27d20d947fc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0855aa9e95ef9375ded8247008dd1b259573f1b89b2aacb460cb2de280d4289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fc6009434717a8377a5cc6a836235c61de79ea9f85581a22d8c2229dde2c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:55Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9hnxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:10Z is after 2025-08-24T17:21:41Z"
Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.570495 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.570550 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.570566 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.570588 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.570645 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:10Z","lastTransitionTime":"2026-01-28T15:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.672722 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.672754 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.672767 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.672785 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.672796 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:10Z","lastTransitionTime":"2026-01-28T15:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.775084 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.775134 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.775152 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.775174 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.775219 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:10Z","lastTransitionTime":"2026-01-28T15:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.859597 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 17:17:37.507697593 +0000 UTC
Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.877763 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.877837 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.877865 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.877898 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.877922 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:10Z","lastTransitionTime":"2026-01-28T15:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.890940 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.891014 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:02:10 crc kubenswrapper[4893]: E0128 15:02:10.891081 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.890943 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:02:10 crc kubenswrapper[4893]: E0128 15:02:10.891250 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:02:10 crc kubenswrapper[4893]: E0128 15:02:10.891427 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.980276 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.980324 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.980339 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.980358 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:10 crc kubenswrapper[4893]: I0128 15:02:10.980373 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:10Z","lastTransitionTime":"2026-01-28T15:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.084508 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.084557 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.084570 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.084590 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.084604 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:11Z","lastTransitionTime":"2026-01-28T15:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.187854 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.187935 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.187955 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.187981 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.188001 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:11Z","lastTransitionTime":"2026-01-28T15:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.295142 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.295508 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.295640 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.295813 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.295942 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:11Z","lastTransitionTime":"2026-01-28T15:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.399436 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.399499 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.399512 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.399529 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.399544 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:11Z","lastTransitionTime":"2026-01-28T15:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.502506 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.502545 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.502555 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.502581 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.502592 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:11Z","lastTransitionTime":"2026-01-28T15:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.605076 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.605122 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.605135 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.605153 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.605167 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:11Z","lastTransitionTime":"2026-01-28T15:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.709055 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.709118 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.709137 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.709164 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.709182 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:11Z","lastTransitionTime":"2026-01-28T15:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.812609 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.812651 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.812665 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.812680 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.812690 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:11Z","lastTransitionTime":"2026-01-28T15:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.860373 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 09:44:18.38382255 +0000 UTC
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.891307 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn"
Jan 28 15:02:11 crc kubenswrapper[4893]: E0128 15:02:11.891609 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.892678 4893 scope.go:117] "RemoveContainer" containerID="aefad88fce146a71537f97c7d24b173334df3029b961374eac0ce4a7abc1a256"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.916951 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.917205 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.917372 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.917539 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:11 crc kubenswrapper[4893]: I0128 15:02:11.917700 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:11Z","lastTransitionTime":"2026-01-28T15:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.020644 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.020680 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.020691 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.020712 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.020725 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:12Z","lastTransitionTime":"2026-01-28T15:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.122956 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.122980 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.122989 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.123013 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.123022 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:12Z","lastTransitionTime":"2026-01-28T15:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.225354 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.225382 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.225391 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.225404 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.225412 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:12Z","lastTransitionTime":"2026-01-28T15:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.296490 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5q54w_135b9f51-26ac-44c4-a817-cbfa4b36ae54/ovnkube-controller/1.log"
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.300424 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" event={"ID":"135b9f51-26ac-44c4-a817-cbfa4b36ae54","Type":"ContainerStarted","Data":"89f2e87210cb1a34cf493a3e3bc1dcfd791f4e4e524fcaf96bf8482bd5d3dd3b"}
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.301558 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w"
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.327853 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:12Z is after 2025-08-24T17:21:41Z"
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.329034 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.329086 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.329104 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.329124 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.329138 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:12Z","lastTransitionTime":"2026-01-28T15:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.347004 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:12Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.366828 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:12Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.404907 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:12Z is after 2025-08-24T17:21:41Z"
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.431358 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.431406 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.431424 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.431448 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.431466 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:12Z","lastTransitionTime":"2026-01-28T15:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.432992 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f2e87210cb1a34cf493a3e3bc1dcfd791f4e4e
524fcaf96bf8482bd5d3dd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aefad88fce146a71537f97c7d24b173334df3029b961374eac0ce4a7abc1a256\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"message\\\":\\\"nshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 15:01:55.096197 6332 services_controller.go:356] Processing sync for service openshift-ingress-operator/metrics for network=default\\\\nI0128 15:01:55.096191 6332 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.110:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f9232b32-e89f-4c8e-acc4-c6801b70dcb0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0128 15:01:55.096075 6332 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:12Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.454684 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dqjfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27c2667f-3b81-4103-b924-fd2ec1678757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dqjfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:12Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.470008 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:12Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.479461 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:12Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.491228 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/27c2667f-3b81-4103-b924-fd2ec1678757-metrics-certs\") pod \"network-metrics-daemon-dqjfn\" (UID: \"27c2667f-3b81-4103-b924-fd2ec1678757\") " pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.491325 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"673baa26-aa9b-4740-b00a-27d20d947fc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0855aa9e95ef9375ded8247008dd1b259573f1b89b2aacb460cb2de280d4289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fc6009434717a8377a5cc6a836235c61de79ea9f85581a22d8c2229dde2c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9hnxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:12Z is after 2025-08-24T17:21:41Z" Jan 28 
15:02:12 crc kubenswrapper[4893]: E0128 15:02:12.491442 4893 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:02:12 crc kubenswrapper[4893]: E0128 15:02:12.491523 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/27c2667f-3b81-4103-b924-fd2ec1678757-metrics-certs podName:27c2667f-3b81-4103-b924-fd2ec1678757 nodeName:}" failed. No retries permitted until 2026-01-28 15:02:28.491505579 +0000 UTC m=+66.265120607 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/27c2667f-3b81-4103-b924-fd2ec1678757-metrics-certs") pod "network-metrics-daemon-dqjfn" (UID: "27c2667f-3b81-4103-b924-fd2ec1678757") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.503202 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runni
ng\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:12Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.516924 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kube
rnetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:12Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.528455 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6908a5e2-782f-4cc6-af34-a62a546a9dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8aebd6e0a3c524d6d950232c7b12bb477ac32d9334863f94e6b3fa094689076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a64db1e71a93568b0df4fd7ba25703120bf42dad2dac4a67a4c59481fea8e45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7d43d04a0e8db178dabe494cea96d5bce395b0323225f7aefeab20beb8376d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:12Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.534145 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.534188 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.534199 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.534215 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.534227 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:12Z","lastTransitionTime":"2026-01-28T15:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.545675 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4132d2592d5c12a6e2fe340b9a6bc7fb6104bcd378bb48e44da753c0ed1be380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:12Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.559124 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:12Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.569567 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:12Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.589303 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:12Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.601313 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"h
ostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:12Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.636962 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.637014 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.637027 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.637047 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.637060 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:12Z","lastTransitionTime":"2026-01-28T15:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.739760 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.739812 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.739824 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.739841 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.739853 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:12Z","lastTransitionTime":"2026-01-28T15:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.842397 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.842426 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.842436 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.842451 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.842462 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:12Z","lastTransitionTime":"2026-01-28T15:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.861326 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:16:26.441405334 +0000 UTC Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.891300 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.891322 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:02:12 crc kubenswrapper[4893]: E0128 15:02:12.891633 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.891328 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:02:12 crc kubenswrapper[4893]: E0128 15:02:12.891744 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:02:12 crc kubenswrapper[4893]: E0128 15:02:12.891810 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.910820 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-28T15:02:12Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.923220 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:12Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.937376 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"673baa26-aa9b-4740-b00a-27d20d947fc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0855aa9e95ef9375ded8247008dd1b259573f1b89b2aacb460cb2de280d4289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fc6009434717a8377a5cc6a836235c61de79ea9f85581a22d8c2229dde2c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9hnxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:12Z is after 2025-08-24T17:21:41Z" Jan 28 
15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.945071 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.945121 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.945138 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.945160 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.945179 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:12Z","lastTransitionTime":"2026-01-28T15:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.954724 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63
a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:12Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.969381 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4132d2592d5c12a6e2fe340b9a6bc7fb6104bcd378bb48e44da753c0ed1be380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:12Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:12 crc kubenswrapper[4893]: I0128 15:02:12.983724 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:12Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.005940 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.020649 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.033614 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"h
ostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.046516 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.048792 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.048847 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.048862 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.048885 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 
15:02:13.048901 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:13Z","lastTransitionTime":"2026-01-28T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.059045 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6908a5e2-782f-4cc6-af34-a62a546a9dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8aebd6e0a3c524d6d950232c7b12bb477ac32d9334863f94e6b3fa094689076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a64db1e71a93568b0df4fd7ba25703120bf42dad2dac4a67a4c59481fea8e45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7d43d04a0e8db178dabe494cea96d5bce395b0323225f7aefeab20beb8376d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.071169 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.090106 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.112402 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f2e87210cb1a34cf493a3e3bc1dcfd791f4e4e524fcaf96bf8482bd5d3dd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aefad88fce146a71537f97c7d24b173334df3029b961374eac0ce4a7abc1a256\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"message\\\":\\\"nshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 15:01:55.096197 6332 services_controller.go:356] Processing sync for service openshift-ingress-operator/metrics for network=default\\\\nI0128 15:01:55.096191 6332 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.110:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f9232b32-e89f-4c8e-acc4-c6801b70dcb0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0128 15:01:55.096075 6332 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.125661 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dqjfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27c2667f-3b81-4103-b924-fd2ec1678757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dqjfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.138047 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.151558 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.151605 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.151618 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.151639 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.151652 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:13Z","lastTransitionTime":"2026-01-28T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.151663 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.254340 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.254493 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.254507 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.254589 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.254628 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:13Z","lastTransitionTime":"2026-01-28T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.304198 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5q54w_135b9f51-26ac-44c4-a817-cbfa4b36ae54/ovnkube-controller/2.log" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.305009 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5q54w_135b9f51-26ac-44c4-a817-cbfa4b36ae54/ovnkube-controller/1.log" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.307604 4893 generic.go:334] "Generic (PLEG): container finished" podID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerID="89f2e87210cb1a34cf493a3e3bc1dcfd791f4e4e524fcaf96bf8482bd5d3dd3b" exitCode=1 Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.307738 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" event={"ID":"135b9f51-26ac-44c4-a817-cbfa4b36ae54","Type":"ContainerDied","Data":"89f2e87210cb1a34cf493a3e3bc1dcfd791f4e4e524fcaf96bf8482bd5d3dd3b"} Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.307854 4893 scope.go:117] "RemoveContainer" containerID="aefad88fce146a71537f97c7d24b173334df3029b961374eac0ce4a7abc1a256" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.308292 4893 scope.go:117] "RemoveContainer" containerID="89f2e87210cb1a34cf493a3e3bc1dcfd791f4e4e524fcaf96bf8482bd5d3dd3b" Jan 28 15:02:13 crc kubenswrapper[4893]: E0128 15:02:13.308760 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5q54w_openshift-ovn-kubernetes(135b9f51-26ac-44c4-a817-cbfa4b36ae54)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.321932 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.334233 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.347995 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"673baa26-aa9b-4740-b00a-27d20d947fc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0855aa9e95ef9375ded8247008dd1b259573f1b89b2aacb460cb2de280d4289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fc6009434717a8377a5cc6a836235c61de79ea9f85581a22d8c2229dde2c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9hnxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z" Jan 28 
15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.356940 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.356978 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.356987 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.357007 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.357018 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:13Z","lastTransitionTime":"2026-01-28T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.361494 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63
a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.374334 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\
"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.390923 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.404057 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6908a5e2-782f-4cc6-af34-a62a546a9dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8aebd6e0a3c524d6d950232c7b12bb477ac32d9334863f94e6b3fa094689076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a64db1e71a93568b0df4fd7ba25703120bf42dad2dac4a67a4c59481fea8e45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7d43d04a0e8db178dabe494cea96d5bce395b0323225f7aefeab20beb8376d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.413857 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.413910 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.413919 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.413938 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 
15:02:13.413948 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:13Z","lastTransitionTime":"2026-01-28T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.423137 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e
75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4132d2592d5c12a6e2fe340b9a6bc7fb6104bcd378bb48e44da753c0ed1be380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:13 crc kubenswrapper[4893]: E0128 15:02:13.429775 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.433667 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.433722 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.433735 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.433752 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.433762 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:13Z","lastTransitionTime":"2026-01-28T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.441829 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:13 crc kubenswrapper[4893]: E0128 15:02:13.448955 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"2
29bc78e-0037-4fd6-b24e-ff333227d169\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.453768 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.453836 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.453853 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.453869 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.453881 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:13Z","lastTransitionTime":"2026-01-28T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.458274 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:13 crc kubenswrapper[4893]: E0128 15:02:13.467444 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 
2025-08-24T17:21:41Z" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.470911 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.470940 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.470951 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.470970 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.470981 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:13Z","lastTransitionTime":"2026-01-28T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.475361 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:13 crc kubenswrapper[4893]: E0128 15:02:13.483256 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 
2025-08-24T17:21:41Z" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.486789 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.486840 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.486860 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.486880 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.486893 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:13Z","lastTransitionTime":"2026-01-28T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.487536 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z"
Jan 28 15:02:13 crc kubenswrapper[4893]: E0128 15:02:13.499945 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 
2025-08-24T17:21:41Z" Jan 28 15:02:13 crc kubenswrapper[4893]: E0128 15:02:13.500066 4893 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.500228 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.501970 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.501994 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.502007 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.502024 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.502037 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:13Z","lastTransitionTime":"2026-01-28T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.510908 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z"
Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.526359 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z"
Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.548712 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f2e87210cb1a34cf493a3e3bc1dcfd791f4e4e524fcaf96bf8482bd5d3dd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aefad88fce146a71537f97c7d24b173334df3029b961374eac0ce4a7abc1a256\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"message\\\":\\\"nshift-marketplace/community-operators_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0128 15:01:55.096197 6332 services_controller.go:356] Processing sync for service openshift-ingress-operator/metrics for network=default\\\\nI0128 15:01:55.096191 6332 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/package-server-manager-metrics]} name:Service_openshift-operator-lifecycle-manager/package-server-manager-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.110:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f9232b32-e89f-4c8e-acc4-c6801b70dcb0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0128 15:01:55.096075 6332 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f2e87210cb1a34cf493a3e3bc1dcfd791f4e4e524fcaf96bf8482bd5d3dd3b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:02:12Z\\\",\\\"message\\\":\\\"05 6560 factory.go:656] Stopping watch factory\\\\nI0128 
15:02:12.779031 6560 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:02:12.779117 6560 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779169 6560 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779212 6560 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779249 6560 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779547 6560 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.783327 6560 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0128 15:02:12.783354 6560 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0128 15:02:12.783409 6560 ovnkube.go:599] Stopped ovnkube\\\\nI0128 15:02:12.783440 6560 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 15:02:12.783536 6560 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z"
Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.559273 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dqjfn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27c2667f-3b81-4103-b924-fd2ec1678757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dqjfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:13Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.605958 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.606012 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.606026 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.606048 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.606061 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:13Z","lastTransitionTime":"2026-01-28T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.710039 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.710130 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.710155 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.710185 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.710204 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:13Z","lastTransitionTime":"2026-01-28T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.813053 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.813120 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.813140 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.813212 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.813232 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:13Z","lastTransitionTime":"2026-01-28T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.862449 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 04:10:57.5681467 +0000 UTC Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.890952 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:02:13 crc kubenswrapper[4893]: E0128 15:02:13.891233 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.916170 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.916321 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.916342 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.916371 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:13 crc kubenswrapper[4893]: I0128 15:02:13.916391 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:13Z","lastTransitionTime":"2026-01-28T15:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.020249 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.020382 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.020759 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.020838 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.021173 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:14Z","lastTransitionTime":"2026-01-28T15:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.124773 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.124841 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.124861 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.124887 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.124907 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:14Z","lastTransitionTime":"2026-01-28T15:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.228838 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.228880 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.228889 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.228904 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.228914 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:14Z","lastTransitionTime":"2026-01-28T15:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.314928 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5q54w_135b9f51-26ac-44c4-a817-cbfa4b36ae54/ovnkube-controller/2.log" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.318905 4893 scope.go:117] "RemoveContainer" containerID="89f2e87210cb1a34cf493a3e3bc1dcfd791f4e4e524fcaf96bf8482bd5d3dd3b" Jan 28 15:02:14 crc kubenswrapper[4893]: E0128 15:02:14.319161 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5q54w_openshift-ovn-kubernetes(135b9f51-26ac-44c4-a817-cbfa4b36ae54)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.332540 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.332617 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.332632 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.332656 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.332670 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:14Z","lastTransitionTime":"2026-01-28T15:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.341548 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.359205 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.376208 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.394696 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.417728 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f2e87210cb1a34cf493a3e3bc1dcfd791f4e4e524fcaf96bf8482bd5d3dd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f2e87210cb1a34cf493a3e3bc1dcfd791f4e4e524fcaf96bf8482bd5d3dd3b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:02:12Z\\\",\\\"message\\\":\\\"05 6560 factory.go:656] Stopping watch factory\\\\nI0128 15:02:12.779031 6560 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:02:12.779117 6560 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779169 6560 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779212 6560 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779249 6560 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779547 6560 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.783327 6560 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0128 15:02:12.783354 6560 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0128 15:02:12.783409 6560 ovnkube.go:599] Stopped ovnkube\\\\nI0128 15:02:12.783440 6560 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 15:02:12.783536 6560 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:02:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5q54w_openshift-ovn-kubernetes(135b9f51-26ac-44c4-a817-cbfa4b36ae54)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.430365 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dqjfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27c2667f-3b81-4103-b924-fd2ec1678757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dqjfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.436270 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.436355 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.436383 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.436419 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.436448 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:14Z","lastTransitionTime":"2026-01-28T15:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.455054 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.470687 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.484058 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"673baa26-aa9b-4740-b00a-27d20d947fc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0855aa9e95ef9375ded8247008dd1b259573f1b89b2aacb460cb2de280d4289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fc6009434717a8377a5cc6a836235c61de79ea9f85581a22d8c2229dde2c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9hnxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:14Z is after 2025-08-24T17:21:41Z" Jan 28 
15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.498318 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-28T15:02:14Z is after 2025-08-24T17:21:41Z"
Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.517650 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:14Z is after 2025-08-24T17:21:41Z"
Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.531538 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6908a5e2-782f-4cc6-af34-a62a546a9dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8aebd6e0a3c524d6d950232c7b12bb477ac32d9334863f94e6b3fa094689076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a64db1e71a93568b0df4fd7ba25703120bf42dad2dac4a67a4c59481fea8e45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7d43d04a0e8db178dabe494cea96d5bce395b0323225f7aefeab20beb8376d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.538738 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.538794 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.538814 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.538855 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.538873 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:14Z","lastTransitionTime":"2026-01-28T15:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.550422 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4132d2592d5c12a6e2fe340b9a6bc7fb6104bcd378bb48e44da753c0ed1be380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.566589 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:14Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.583623 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:14Z is after 2025-08-24T17:21:41Z"
Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.600678 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:14Z is after 2025-08-24T17:21:41Z"
Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.617747 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:14Z is after 2025-08-24T17:21:41Z"
Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.642347 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.642424 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.642449 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.642510 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.642542 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:14Z","lastTransitionTime":"2026-01-28T15:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.714372 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.714532 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.714596 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.714654 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:02:14 crc kubenswrapper[4893]: E0128 15:02:14.714682 4893 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:02:14 crc kubenswrapper[4893]: E0128 15:02:14.714714 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:02:14 crc kubenswrapper[4893]: E0128 15:02:14.714752 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:02:14 crc kubenswrapper[4893]: E0128 15:02:14.714768 4893 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:02:14 crc kubenswrapper[4893]: E0128 15:02:14.714808 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:02:46.714776391 +0000 UTC m=+84.488391599 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:02:14 crc kubenswrapper[4893]: E0128 15:02:14.714808 4893 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:02:14 crc kubenswrapper[4893]: E0128 15:02:14.714834 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:02:46.714823982 +0000 UTC m=+84.488439220 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:02:14 crc kubenswrapper[4893]: E0128 15:02:14.714807 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:02:14 crc kubenswrapper[4893]: E0128 15:02:14.714995 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:02:14 crc kubenswrapper[4893]: E0128 15:02:14.714904 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:02:46.714879003 +0000 UTC m=+84.488494061 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:02:14 crc kubenswrapper[4893]: E0128 15:02:14.715036 4893 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:02:14 crc kubenswrapper[4893]: E0128 15:02:14.715165 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:02:46.71513191 +0000 UTC m=+84.488747108 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.745891 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.745958 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.745976 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.745999 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.746020 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:14Z","lastTransitionTime":"2026-01-28T15:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.815869 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:02:14 crc kubenswrapper[4893]: E0128 15:02:14.816275 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:02:46.816227663 +0000 UTC m=+84.589842691 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.849350 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.849430 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.849454 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.849516 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.849539 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:14Z","lastTransitionTime":"2026-01-28T15:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.862973 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 16:43:43.483472637 +0000 UTC Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.891539 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.891692 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.891555 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:02:14 crc kubenswrapper[4893]: E0128 15:02:14.891913 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:02:14 crc kubenswrapper[4893]: E0128 15:02:14.891785 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:02:14 crc kubenswrapper[4893]: E0128 15:02:14.892108 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.953119 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.953194 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.953213 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.953243 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:14 crc kubenswrapper[4893]: I0128 15:02:14.953262 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:14Z","lastTransitionTime":"2026-01-28T15:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.056047 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.056114 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.056127 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.056144 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.056159 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:15Z","lastTransitionTime":"2026-01-28T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.159155 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.159213 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.159226 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.159246 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.159264 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:15Z","lastTransitionTime":"2026-01-28T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.263005 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.263053 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.263066 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.263086 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.263097 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:15Z","lastTransitionTime":"2026-01-28T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.367284 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.367396 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.367415 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.367443 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.367464 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:15Z","lastTransitionTime":"2026-01-28T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.470667 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.470741 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.470767 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.470796 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.470815 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:15Z","lastTransitionTime":"2026-01-28T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.573941 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.574015 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.574034 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.574066 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.574087 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:15Z","lastTransitionTime":"2026-01-28T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.677368 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.677454 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.677507 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.677537 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.677558 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:15Z","lastTransitionTime":"2026-01-28T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.780900 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.780955 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.780969 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.780991 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.781009 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:15Z","lastTransitionTime":"2026-01-28T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.863179 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 22:44:27.941766991 +0000 UTC Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.883355 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.883406 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.883422 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.883447 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.883462 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:15Z","lastTransitionTime":"2026-01-28T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.891272 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:02:15 crc kubenswrapper[4893]: E0128 15:02:15.891444 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757"
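
The entries above are the steady-state signature of a node whose network plugin has not come up: kubelet finds no CNI configuration in /etc/kubernetes/cni/net.d/, reports NetworkReady=false, holds the node's Ready condition at False, and skips sandbox creation for every pod that still needs pod networking (the "No sandbox for pod can be found" / "Error syncing pod, skipping" pairs). A minimal sketch for tallying which pods are stuck, assuming this excerpt is saved to a hypothetical kubelet.log:

    # Tally "Error syncing pod, skipping" entries by pod in a saved journal
    # excerpt. Relies only on the kubenswrapper line format shown above;
    # "kubelet.log" is a stand-in for wherever the excerpt lives.
    import re
    from collections import Counter

    PAT = re.compile(
        r'pod_workers\.go:\d+\] "Error syncing pod, skipping".*?'
        r'pod="(?P<pod>[^"]+)" podUID="(?P<uid>[^"]+)"',
        re.DOTALL,  # entries may wrap across physical lines
    )

    def failing_pods(text: str) -> Counter:
        return Counter(m.group("pod") for m in PAT.finditer(text))

    with open("kubelet.log", encoding="utf-8") as f:
        for pod, n in failing_pods(f.read()).most_common():
            print(f"{n:4d}  {pod}")
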
Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.986857 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.986922 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.986937 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.986961 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:15 crc kubenswrapper[4893]: I0128 15:02:15.986980 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:15Z","lastTransitionTime":"2026-01-28T15:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.090282 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.090361 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.090377 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.090397 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.090697 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:16Z","lastTransitionTime":"2026-01-28T15:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
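
The five-entry cycle repeating through this stretch fires roughly every 100 ms: four node events are recorded (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady) and the Ready condition is re-set to False with reason KubeletNotReady. The condition payload in the setters.go entries is plain JSON, so the heartbeat history can be read straight out of the text; a sketch under the same kubelet.log assumption as above:

    # Extract the Ready-condition JSON from each "Node became not ready"
    # entry and print heartbeat time, reason, and status.
    import json
    import re

    COND = re.compile(r'"Node became not ready" node="crc" condition=(\{.*?\})',
                      re.DOTALL)

    def conditions(text: str):
        for m in COND.finditer(text):
            # Long entries wrap in this excerpt, so collapse embedded
            # whitespace before handing the payload to the JSON parser.
            yield json.loads(" ".join(m.group(1).split()))

    with open("kubelet.log", encoding="utf-8") as f:
        for c in conditions(f.read()):
            print(c["lastHeartbeatTime"], c["reason"], c["status"])
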
Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.194404 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.194458 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.194494 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.194514 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.194529 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:16Z","lastTransitionTime":"2026-01-28T15:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.297460 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.297513 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.297523 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.297537 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.297547 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:16Z","lastTransitionTime":"2026-01-28T15:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.400612 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.400710 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.400731 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.400763 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.400785 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:16Z","lastTransitionTime":"2026-01-28T15:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.502802 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.502838 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.502848 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.502864 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.502879 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:16Z","lastTransitionTime":"2026-01-28T15:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.606778 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.606861 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.606889 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.606924 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.606951 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:16Z","lastTransitionTime":"2026-01-28T15:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.709924 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.709978 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.709993 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.710009 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.710031 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:16Z","lastTransitionTime":"2026-01-28T15:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.814002 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.814046 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.814058 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.814078 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.814092 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:16Z","lastTransitionTime":"2026-01-28T15:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.864084 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 13:25:51.089195639 +0000 UTC Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.891947 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.892082 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.891964 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:02:16 crc kubenswrapper[4893]: E0128 15:02:16.892222 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:02:16 crc kubenswrapper[4893]: E0128 15:02:16.892334 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:02:16 crc kubenswrapper[4893]: E0128 15:02:16.892450 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.916522 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.916583 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.916594 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.916611 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:16 crc kubenswrapper[4893]: I0128 15:02:16.916623 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:16Z","lastTransitionTime":"2026-01-28T15:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.020030 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.020243 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.020270 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.020310 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.020335 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:17Z","lastTransitionTime":"2026-01-28T15:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.123272 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.123314 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.123324 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.123339 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.123348 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:17Z","lastTransitionTime":"2026-01-28T15:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.226239 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.226297 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.226308 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.226325 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.226335 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:17Z","lastTransitionTime":"2026-01-28T15:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.328137 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.328186 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.328199 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.328213 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.328222 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:17Z","lastTransitionTime":"2026-01-28T15:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.430743 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.430783 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.430795 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.430811 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.430821 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:17Z","lastTransitionTime":"2026-01-28T15:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.533344 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.533382 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.533400 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.533419 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.533432 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:17Z","lastTransitionTime":"2026-01-28T15:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.635561 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.635602 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.635612 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.635626 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.635637 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:17Z","lastTransitionTime":"2026-01-28T15:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.738705 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.738769 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.738797 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.738828 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.738851 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:17Z","lastTransitionTime":"2026-01-28T15:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.842705 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.842762 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.842774 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.842793 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.842803 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:17Z","lastTransitionTime":"2026-01-28T15:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.864518 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 21:05:21.737061195 +0000 UTC Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.890866 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:02:17 crc kubenswrapper[4893]: E0128 15:02:17.891073 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757"
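
Two certificate problems run through the rest of this excerpt. First, the kubelet-serving certificate expires 2026-02-24, yet the jittered rotation deadlines logged by certificate_manager.go (2026-01-13, 2026-01-14, and 2025-11-10 in this excerpt) all fall before the node clock of 2026-01-28, which suggests rotation is already overdue and is re-evaluated on every pass, hence the once-per-second repeats. Second, the status_manager entries just below show every pod status patch being rejected because the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743/pod presents a certificate that expired 2025-08-24T17:21:41Z. A quick check of that arithmetic, with both timestamps copied from the log (datetime.fromisoformat accepts the trailing "Z" on Python 3.11+, an assumption about the interpreter in use):

    # Confirm the x509 failure reported below: the node clock sits well
    # past the webhook certificate's notAfter date quoted in the error.
    from datetime import datetime

    node_clock = datetime.fromisoformat("2026-01-28T15:02:18Z")
    webhook_not_after = datetime.fromisoformat("2025-08-24T17:21:41Z")

    assert node_clock > webhook_not_after
    print("webhook cert expired", node_clock - webhook_not_after, "ago")
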
Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.945979 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.946024 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.946033 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.946051 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:17 crc kubenswrapper[4893]: I0128 15:02:17.946062 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:17Z","lastTransitionTime":"2026-01-28T15:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.049427 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.049509 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.049522 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.049540 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.049558 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:18Z","lastTransitionTime":"2026-01-28T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.127645 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.152654 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.154198 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.154266 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.154289 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.154317 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.154335 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:18Z","lastTransitionTime":"2026-01-28T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.168597 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.184721 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"673baa26-aa9b-4740-b00a-27d20d947fc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0855aa9e95ef9375ded8247008dd1b259573f1b89b2aacb460cb2de280d4289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fc6009434717a8377a5cc6a836235c61de79ea9f85581a22d8c2229dde2c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9hnxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 28 
15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.202405 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-28T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.220771 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.238424 4893 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-krkz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.255837 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.257457 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.257511 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.257526 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.257545 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.257561 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:18Z","lastTransitionTime":"2026-01-28T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.273455 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6908a5e2-782f-4cc6-af34-a62a546a9dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8aebd6e0a3c524d6d950232c7b12bb477ac32d9334863f94e6b3fa094689076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a64db1e71a93568b0df4fd7ba25703120bf42dad2dac4a67a4c59481fea8e45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7d43d04a0e8db178dabe494cea96d5bce395b0323225f7aefeab20beb8376d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.297813 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4
acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4132d2592d5c12a6e2fe340b9a6bc7fb6104bcd378bb48e44da753c0ed1be380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.311842 4893 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.326205 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.339941 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dqjfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27c2667f-3b81-4103-b924-fd2ec1678757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dqjfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.352087 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.361174 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.361219 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.361234 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.361258 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.361271 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:18Z","lastTransitionTime":"2026-01-28T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.365446 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.384621 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.401343 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.420311 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f2e87210cb1a34cf493a3e3bc1dcfd791f4e4e524fcaf96bf8482bd5d3dd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f2e87210cb1a34cf493a3e3bc1dcfd791f4e4e524fcaf96bf8482bd5d3dd3b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:02:12Z\\\",\\\"message\\\":\\\"05 6560 factory.go:656] Stopping watch factory\\\\nI0128 15:02:12.779031 6560 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:02:12.779117 6560 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779169 6560 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779212 6560 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779249 6560 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779547 6560 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.783327 6560 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0128 15:02:12.783354 6560 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0128 15:02:12.783409 6560 ovnkube.go:599] Stopped ovnkube\\\\nI0128 15:02:12.783440 6560 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 15:02:12.783536 6560 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:02:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5q54w_openshift-ovn-kubernetes(135b9f51-26ac-44c4-a817-cbfa4b36ae54)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:18Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.464329 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.464383 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.464396 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.464415 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.464426 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:18Z","lastTransitionTime":"2026-01-28T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.567122 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.567187 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.567202 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.567228 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.567247 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:18Z","lastTransitionTime":"2026-01-28T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.669945 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.670361 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.670466 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.670599 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.670685 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:18Z","lastTransitionTime":"2026-01-28T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.773306 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.773356 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.773372 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.773396 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.773413 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:18Z","lastTransitionTime":"2026-01-28T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.864734 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 19:21:40.547113775 +0000 UTC Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.876850 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.876896 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.876908 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.876931 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.876942 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:18Z","lastTransitionTime":"2026-01-28T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.891206 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:02:18 crc kubenswrapper[4893]: E0128 15:02:18.891321 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.891496 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:02:18 crc kubenswrapper[4893]: E0128 15:02:18.891542 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.891739 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:02:18 crc kubenswrapper[4893]: E0128 15:02:18.891915 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.980126 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.980181 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.980193 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.980213 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:18 crc kubenswrapper[4893]: I0128 15:02:18.980223 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:18Z","lastTransitionTime":"2026-01-28T15:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.082773 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.082860 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.082876 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.082925 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.082939 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:19Z","lastTransitionTime":"2026-01-28T15:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.186091 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.186181 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.186203 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.186236 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.186256 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:19Z","lastTransitionTime":"2026-01-28T15:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.289383 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.289438 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.289450 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.289467 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.289498 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:19Z","lastTransitionTime":"2026-01-28T15:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.395354 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.395411 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.395425 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.395441 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.395481 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:19Z","lastTransitionTime":"2026-01-28T15:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.498444 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.498764 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.498862 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.498954 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.499044 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:19Z","lastTransitionTime":"2026-01-28T15:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.601039 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.601109 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.601134 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.601169 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.601192 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:19Z","lastTransitionTime":"2026-01-28T15:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.709542 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.709651 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.709674 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.709695 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.709711 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:19Z","lastTransitionTime":"2026-01-28T15:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.811918 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.811957 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.811968 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.811984 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.811996 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:19Z","lastTransitionTime":"2026-01-28T15:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.865815 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 00:24:03.038360798 +0000 UTC Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.891510 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:02:19 crc kubenswrapper[4893]: E0128 15:02:19.891666 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.914069 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.914119 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.914133 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.914158 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:19 crc kubenswrapper[4893]: I0128 15:02:19.914172 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:19Z","lastTransitionTime":"2026-01-28T15:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.016948 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.016995 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.017005 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.017021 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.017034 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:20Z","lastTransitionTime":"2026-01-28T15:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.119259 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.119302 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.119311 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.119327 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.119336 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:20Z","lastTransitionTime":"2026-01-28T15:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.221191 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.221233 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.221245 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.221263 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.221277 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:20Z","lastTransitionTime":"2026-01-28T15:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.323721 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.323759 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.323770 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.323788 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.323800 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:20Z","lastTransitionTime":"2026-01-28T15:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.425990 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.426028 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.426040 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.426059 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.426071 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:20Z","lastTransitionTime":"2026-01-28T15:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.528463 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.528505 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.528515 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.528528 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.528536 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:20Z","lastTransitionTime":"2026-01-28T15:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.631922 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.631979 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.631993 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.632015 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.632028 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:20Z","lastTransitionTime":"2026-01-28T15:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.734589 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.734655 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.734668 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.734686 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.734699 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:20Z","lastTransitionTime":"2026-01-28T15:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.837342 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.837395 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.837411 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.837432 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.837448 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:20Z","lastTransitionTime":"2026-01-28T15:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.866715 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 03:22:31.381913127 +0000 UTC Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.891343 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.891405 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.891514 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:02:20 crc kubenswrapper[4893]: E0128 15:02:20.891576 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:02:20 crc kubenswrapper[4893]: E0128 15:02:20.891695 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:02:20 crc kubenswrapper[4893]: E0128 15:02:20.891839 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.940419 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.940510 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.940533 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.940556 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:20 crc kubenswrapper[4893]: I0128 15:02:20.940575 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:20Z","lastTransitionTime":"2026-01-28T15:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.043292 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.043333 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.043366 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.043381 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.043394 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:21Z","lastTransitionTime":"2026-01-28T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.146081 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.146123 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.146133 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.146149 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.146160 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:21Z","lastTransitionTime":"2026-01-28T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.249266 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.249311 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.249321 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.249339 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.249348 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:21Z","lastTransitionTime":"2026-01-28T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.351504 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.351548 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.351560 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.351577 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.351591 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:21Z","lastTransitionTime":"2026-01-28T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.454259 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.454309 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.454326 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.454351 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.454369 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:21Z","lastTransitionTime":"2026-01-28T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.557349 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.557401 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.557414 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.557434 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.557448 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:21Z","lastTransitionTime":"2026-01-28T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.660733 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.661025 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.661063 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.661093 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.661119 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:21Z","lastTransitionTime":"2026-01-28T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.764011 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.764064 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.764085 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.764103 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.764113 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:21Z","lastTransitionTime":"2026-01-28T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.866849 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 18:16:09.717926364 +0000 UTC Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.867836 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.867888 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.867903 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.867925 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.867937 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:21Z","lastTransitionTime":"2026-01-28T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.890814 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:02:21 crc kubenswrapper[4893]: E0128 15:02:21.890970 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.971153 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.971221 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.971238 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.971263 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:21 crc kubenswrapper[4893]: I0128 15:02:21.971281 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:21Z","lastTransitionTime":"2026-01-28T15:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.074076 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.074122 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.074134 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.074154 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.074166 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:22Z","lastTransitionTime":"2026-01-28T15:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.177136 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.177190 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.177202 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.177218 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.177236 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:22Z","lastTransitionTime":"2026-01-28T15:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.280202 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.280259 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.280278 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.280299 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.280311 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:22Z","lastTransitionTime":"2026-01-28T15:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.383311 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.383357 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.383366 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.383383 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.383394 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:22Z","lastTransitionTime":"2026-01-28T15:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.485645 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.485696 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.485711 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.485729 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.485746 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:22Z","lastTransitionTime":"2026-01-28T15:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.588335 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.588415 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.588435 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.588460 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.588505 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:22Z","lastTransitionTime":"2026-01-28T15:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.691097 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.691169 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.691185 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.691205 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.691219 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:22Z","lastTransitionTime":"2026-01-28T15:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.793568 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.793652 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.793672 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.793698 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.793718 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:22Z","lastTransitionTime":"2026-01-28T15:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.867561 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 21:14:32.371395658 +0000 UTC Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.891224 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:02:22 crc kubenswrapper[4893]: E0128 15:02:22.891442 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.891592 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.891751 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:02:22 crc kubenswrapper[4893]: E0128 15:02:22.891935 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:02:22 crc kubenswrapper[4893]: E0128 15:02:22.892154 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.896369 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.896403 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.896415 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.896433 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.896445 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:22Z","lastTransitionTime":"2026-01-28T15:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.906804 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.922718 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.937951 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.956848 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.972465 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:22 crc kubenswrapper[4893]: I0128 15:02:22.994643 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:22Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.006095 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.006171 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.006186 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.006203 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.006416 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:23Z","lastTransitionTime":"2026-01-28T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.009685 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6908a5e2-782f-4cc6-af34-a62a546a9dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8aebd6e0a3c524d6d950232c7b12bb477ac32d9334863f94e6b3fa094689076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a64db1e71a93568b0df4fd7ba25703120bf42dad2dac4a67a4c59481fea8e45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7d43d04a0e8db178dabe494cea96d5bce395b0323225f7aefeab20beb8376d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.027421 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4
acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4132d2592d5c12a6e2fe340b9a6bc7fb6104bcd378bb48e44da753c0ed1be380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.049407 4893 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.079597 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f2e87210cb1a34cf493a3e3bc1dcfd791f4e4e524fcaf96bf8482bd5d3dd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f2e87210cb1a34cf493a3e3bc1dcfd791f4e4e524fcaf96bf8482bd5d3dd3b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:02:12Z\\\",\\\"message\\\":\\\"05 6560 factory.go:656] Stopping watch factory\\\\nI0128 15:02:12.779031 6560 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:02:12.779117 6560 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779169 6560 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779212 6560 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779249 6560 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779547 6560 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.783327 6560 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0128 15:02:12.783354 6560 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0128 15:02:12.783409 6560 ovnkube.go:599] Stopped ovnkube\\\\nI0128 15:02:12.783440 6560 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 15:02:12.783536 6560 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:02:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5q54w_openshift-ovn-kubernetes(135b9f51-26ac-44c4-a817-cbfa4b36ae54)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.090982 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dqjfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27c2667f-3b81-4103-b924-fd2ec1678757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dqjfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.104457 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.108831 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.108867 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.108877 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.108890 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.108899 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:23Z","lastTransitionTime":"2026-01-28T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.115234 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.124630 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.133927 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.142440 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"673baa26-aa9b-4740-b00a-27d20d947fc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0855aa9e95ef9375ded8247008dd1b259573f1b89b2aacb460cb2de280d4289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fc6009434717a8377a5cc6a836235c61de79ea9f85581a22d8c2229dde2c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9hnxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:23Z is after 2025-08-24T17:21:41Z" Jan 28 
15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.152334 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.210380 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.210413 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.210422 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.210435 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.210445 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:23Z","lastTransitionTime":"2026-01-28T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.312980 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.313027 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.313038 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.313054 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.313064 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:23Z","lastTransitionTime":"2026-01-28T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.415729 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.416037 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.416055 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.416080 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.416098 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:23Z","lastTransitionTime":"2026-01-28T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.518416 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.518515 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.518541 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.518578 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.518601 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:23Z","lastTransitionTime":"2026-01-28T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.621845 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.621900 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.621912 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.621928 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.621937 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:23Z","lastTransitionTime":"2026-01-28T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.724370 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.724419 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.724432 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.724460 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.724497 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:23Z","lastTransitionTime":"2026-01-28T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.754234 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.754267 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.754279 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.754295 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.754307 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:23Z","lastTransitionTime":"2026-01-28T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:23 crc kubenswrapper[4893]: E0128 15:02:23.769601 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.773185 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.773237 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.773249 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.773272 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.773285 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:23Z","lastTransitionTime":"2026-01-28T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:23 crc kubenswrapper[4893]: E0128 15:02:23.792118 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.795532 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.795579 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.795591 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.795608 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.795620 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:23Z","lastTransitionTime":"2026-01-28T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:23 crc kubenswrapper[4893]: E0128 15:02:23.809087 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.813934 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.814019 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.814036 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.814100 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.814115 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:23Z","lastTransitionTime":"2026-01-28T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:23 crc kubenswrapper[4893]: E0128 15:02:23.831994 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.838956 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.839005 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.839018 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.839036 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.839048 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:23Z","lastTransitionTime":"2026-01-28T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:23 crc kubenswrapper[4893]: E0128 15:02:23.851359 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:23 crc kubenswrapper[4893]: E0128 15:02:23.851580 4893 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.853455 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
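All four status-patch attempts above are rejected for the same reason: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired 2025-08-24T17:21:41Z, while the node clock reads 2026-01-28T15:02:23Z. A minimal Python sketch to read that certificate's validity window directly from the node; the host and port come from the log, but the third-party cryptography package is an assumed dependency, not something the log confirms is installed:

    #!/usr/bin/env python3
    # Fetch the webhook's serving certificate WITHOUT verification
    # (verification would fail, since the certificate is expired)
    # and print its validity window.
    import datetime
    import ssl

    from cryptography import x509  # assumed: pip install cryptography

    HOST, PORT = "127.0.0.1", 9743  # webhook endpoint seen in the log

    # ca_certs=None disables chain and expiry validation, so the
    # expired certificate can still be retrieved.
    pem = ssl.get_server_certificate((HOST, PORT))
    cert = x509.load_pem_x509_certificate(pem.encode())

    # cryptography >= 42 also offers not_valid_{before,after}_utc.
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)
    print("expired:  ", datetime.datetime.utcnow() > cert.not_valid_after)

Against the timestamps in this capture, the sketch should report notAfter 2025-08-24 17:21:41 and expired: True.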
event="NodeHasSufficientMemory" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.853509 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.853520 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.853536 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.853545 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:23Z","lastTransitionTime":"2026-01-28T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.868122 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 14:29:05.661597808 +0000 UTC Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.891615 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:02:23 crc kubenswrapper[4893]: E0128 15:02:23.891790 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.955950 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.956002 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.956021 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.956044 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:23 crc kubenswrapper[4893]: I0128 15:02:23.956058 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:23Z","lastTransitionTime":"2026-01-28T15:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.059166 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.059204 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.059215 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.059231 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.059241 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:24Z","lastTransitionTime":"2026-01-28T15:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.162341 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.162440 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.162468 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.162541 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.162562 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:24Z","lastTransitionTime":"2026-01-28T15:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.265503 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.265575 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.265593 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.265617 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.265636 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:24Z","lastTransitionTime":"2026-01-28T15:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.368541 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.368583 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.368595 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.368613 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.368624 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:24Z","lastTransitionTime":"2026-01-28T15:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.471947 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.471983 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.471993 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.472007 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.472018 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:24Z","lastTransitionTime":"2026-01-28T15:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.574378 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.574424 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.574433 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.574449 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.574458 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:24Z","lastTransitionTime":"2026-01-28T15:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.677349 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.677409 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.677422 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.677443 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.677455 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:24Z","lastTransitionTime":"2026-01-28T15:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.779942 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.779986 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.779996 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.780019 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.780031 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:24Z","lastTransitionTime":"2026-01-28T15:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
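The same five-message sequence (three pressure events, NodeNotReady, "Node became not ready") recurs roughly every 100 ms for as long as the node stays NotReady, so most of this capture is periodic noise. A stdlib-only Python sketch for quantifying that churn when triaging a capture like this one; the pipeline shown in the usage comment (journalctl piped into the script) is illustrative, not taken from this log:

    #!/usr/bin/env python3
    # Summarize the messages that dominate this capture and the time
    # span they cover. Usage (illustrative):
    #   journalctl -u kubelet --no-pager | python3 kubelet_churn.py
    import re
    import sys
    from collections import Counter

    PATTERNS = {
        "node became not ready": re.compile(r'"Node became not ready"'),
        "status patch rejected, will retry": re.compile(r'"Error updating node status, will retry"'),
        "retry budget exhausted": re.compile(r'"Unable to update node status"'),
    }
    # klog prefix: severity letter, MMDD, then HH:MM:SS.microseconds
    TIMESTAMP = re.compile(r"[IWEF]\d{4} (\d{2}:\d{2}:\d{2}\.\d+)")

    counts, stamps = Counter(), []
    for line in sys.stdin:
        for label, pattern in PATTERNS.items():
            if pattern.search(line):
                counts[label] += 1
        match = TIMESTAMP.search(line)
        if match:
            stamps.append(match.group(1))

    for label, n in counts.most_common():
        print(f"{n:6d}  {label}")
    if stamps:
        print(f"  span  {stamps[0]} .. {stamps[-1]}")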
Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.869024 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 16:01:43.965906988 +0000 UTC
[identical event sequence at 15:02:24.883594 omitted]
Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.891800 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.891855 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:02:24 crc kubenswrapper[4893]: E0128 15:02:24.891919 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:02:24 crc kubenswrapper[4893]: E0128 15:02:24.892013 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:02:24 crc kubenswrapper[4893]: I0128 15:02:24.891800 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:02:24 crc kubenswrapper[4893]: E0128 15:02:24.892081 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[identical event sequence at 15:02:24.986517 omitted]
Has your network provider started?"} Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.192737 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.192793 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.192810 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.192834 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.192851 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:25Z","lastTransitionTime":"2026-01-28T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.295354 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.295403 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.295416 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.295434 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.295446 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:25Z","lastTransitionTime":"2026-01-28T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.398346 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.398424 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.398462 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.398499 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.398512 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:25Z","lastTransitionTime":"2026-01-28T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.501075 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.501209 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.501235 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.501268 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.501292 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:25Z","lastTransitionTime":"2026-01-28T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.604264 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.604348 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.604366 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.604385 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.604397 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:25Z","lastTransitionTime":"2026-01-28T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.707409 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.707453 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.707466 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.707503 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.707514 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:25Z","lastTransitionTime":"2026-01-28T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.810231 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.810280 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.810351 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.810378 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.810394 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:25Z","lastTransitionTime":"2026-01-28T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.869583 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 16:45:29.705035846 +0000 UTC Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.890873 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:02:25 crc kubenswrapper[4893]: E0128 15:02:25.891089 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.913517 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.913561 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.913579 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.913603 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:25 crc kubenswrapper[4893]: I0128 15:02:25.913619 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:25Z","lastTransitionTime":"2026-01-28T15:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.015201 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.015232 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.015241 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.015254 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.015264 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:26Z","lastTransitionTime":"2026-01-28T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.117992 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.118033 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.118042 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.118055 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.118064 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:26Z","lastTransitionTime":"2026-01-28T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.220979 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.221028 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.221040 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.221057 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.221070 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:26Z","lastTransitionTime":"2026-01-28T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.323819 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.323917 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.323943 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.323973 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.323997 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:26Z","lastTransitionTime":"2026-01-28T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.427544 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.427603 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.427613 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.427629 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.427657 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:26Z","lastTransitionTime":"2026-01-28T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.530015 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.530055 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.530065 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.530080 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.530089 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:26Z","lastTransitionTime":"2026-01-28T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.636508 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.636551 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.636574 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.636600 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.636616 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:26Z","lastTransitionTime":"2026-01-28T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.740466 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.740521 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.740531 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.740549 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.740559 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:26Z","lastTransitionTime":"2026-01-28T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.843085 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.843133 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.843144 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.843160 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.843172 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:26Z","lastTransitionTime":"2026-01-28T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.870195 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 12:18:28.196638093 +0000 UTC Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.891523 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.891564 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:02:26 crc kubenswrapper[4893]: E0128 15:02:26.891674 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.891711 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:02:26 crc kubenswrapper[4893]: E0128 15:02:26.891760 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:02:26 crc kubenswrapper[4893]: E0128 15:02:26.891869 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.945327 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.945374 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.945385 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.945400 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:26 crc kubenswrapper[4893]: I0128 15:02:26.945411 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:26Z","lastTransitionTime":"2026-01-28T15:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.048911 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.048964 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.048978 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.048996 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.049005 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:27Z","lastTransitionTime":"2026-01-28T15:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.152541 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.152602 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.152625 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.152657 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.152683 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:27Z","lastTransitionTime":"2026-01-28T15:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.255959 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.256056 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.256074 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.256098 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.256116 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:27Z","lastTransitionTime":"2026-01-28T15:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.358259 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.358305 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.358316 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.358336 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.358347 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:27Z","lastTransitionTime":"2026-01-28T15:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.460848 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.460893 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.460904 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.460924 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.460936 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:27Z","lastTransitionTime":"2026-01-28T15:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.563180 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.563240 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.563257 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.563283 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.563300 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:27Z","lastTransitionTime":"2026-01-28T15:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.666157 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.666216 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.666229 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.666255 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.666267 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:27Z","lastTransitionTime":"2026-01-28T15:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.768827 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.768917 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.768944 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.768981 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.769004 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:27Z","lastTransitionTime":"2026-01-28T15:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.870505 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 21:43:12.590274571 +0000 UTC Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.872654 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.872735 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.872762 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.872791 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.872812 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:27Z","lastTransitionTime":"2026-01-28T15:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.891598 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:02:27 crc kubenswrapper[4893]: E0128 15:02:27.891870 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.977045 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.977124 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.977145 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.977179 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:27 crc kubenswrapper[4893]: I0128 15:02:27.977199 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:27Z","lastTransitionTime":"2026-01-28T15:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.079657 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.079707 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.079719 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.079736 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.079751 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:28Z","lastTransitionTime":"2026-01-28T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.182632 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.182716 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.182728 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.182747 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.182757 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:28Z","lastTransitionTime":"2026-01-28T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.285593 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.285652 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.285669 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.285690 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.285702 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:28Z","lastTransitionTime":"2026-01-28T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.388187 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.388232 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.388241 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.388258 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.388268 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:28Z","lastTransitionTime":"2026-01-28T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.491130 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.491192 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.491202 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.491218 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.491231 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:28Z","lastTransitionTime":"2026-01-28T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.558933 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/27c2667f-3b81-4103-b924-fd2ec1678757-metrics-certs\") pod \"network-metrics-daemon-dqjfn\" (UID: \"27c2667f-3b81-4103-b924-fd2ec1678757\") " pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:02:28 crc kubenswrapper[4893]: E0128 15:02:28.559145 4893 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:02:28 crc kubenswrapper[4893]: E0128 15:02:28.559215 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/27c2667f-3b81-4103-b924-fd2ec1678757-metrics-certs podName:27c2667f-3b81-4103-b924-fd2ec1678757 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:00.559196264 +0000 UTC m=+98.332811292 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/27c2667f-3b81-4103-b924-fd2ec1678757-metrics-certs") pod "network-metrics-daemon-dqjfn" (UID: "27c2667f-3b81-4103-b924-fd2ec1678757") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.593989 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.594046 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.594055 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.594072 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.594081 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:28Z","lastTransitionTime":"2026-01-28T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.696823 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.696863 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.696872 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.696888 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.696897 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:28Z","lastTransitionTime":"2026-01-28T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.799683 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.799739 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.799749 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.799767 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.799778 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:28Z","lastTransitionTime":"2026-01-28T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.870665 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 08:01:01.919839491 +0000 UTC Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.890890 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.890890 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.891002 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:02:28 crc kubenswrapper[4893]: E0128 15:02:28.891131 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:02:28 crc kubenswrapper[4893]: E0128 15:02:28.891334 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.892168 4893 scope.go:117] "RemoveContainer" containerID="89f2e87210cb1a34cf493a3e3bc1dcfd791f4e4e524fcaf96bf8482bd5d3dd3b" Jan 28 15:02:28 crc kubenswrapper[4893]: E0128 15:02:28.892314 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5q54w_openshift-ovn-kubernetes(135b9f51-26ac-44c4-a817-cbfa4b36ae54)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" Jan 28 15:02:28 crc kubenswrapper[4893]: E0128 15:02:28.892435 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.902136 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.902162 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.902170 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.902183 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:28 crc kubenswrapper[4893]: I0128 15:02:28.902191 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:28Z","lastTransitionTime":"2026-01-28T15:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:29 crc kubenswrapper[4893]: I0128 15:02:29.004652 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:29 crc kubenswrapper[4893]: I0128 15:02:29.004718 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:29 crc kubenswrapper[4893]: I0128 15:02:29.004739 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:29 crc kubenswrapper[4893]: I0128 15:02:29.004764 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:29 crc kubenswrapper[4893]: I0128 15:02:29.004783 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:29Z","lastTransitionTime":"2026-01-28T15:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:02:29 crc kubenswrapper[4893]: I0128 15:02:29.107358 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:29 crc kubenswrapper[4893]: I0128 15:02:29.107413 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:29 crc kubenswrapper[4893]: I0128 15:02:29.107427 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:29 crc kubenswrapper[4893]: I0128 15:02:29.107448 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:29 crc kubenswrapper[4893]: I0128 15:02:29.107497 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:29Z","lastTransitionTime":"2026-01-28T15:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the same five-record NodeNotReady block repeats at ~100 ms intervals (15:02:29.210, .312, .417, .520, .623, .727, .831); repetitions elided ...]
Jan 28 15:02:29 crc kubenswrapper[4893]: I0128 15:02:29.871285 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 19:48:08.273360573 +0000 UTC
Jan 28 15:02:29 crc kubenswrapper[4893]: I0128 15:02:29.891714 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn"
Jan 28 15:02:29 crc kubenswrapper[4893]: E0128 15:02:29.891882 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757"
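The loop above is the kubelet's node-status heartbeat while the node is stuck NotReady: the container runtime reports NetworkReady=false because it finds no CNI configuration file in /etc/kubernetes/cni/net.d/. The readiness test behind that message reduces to "does the directory contain at least one CNI config file". A minimal Go sketch of that presence check, for illustration only; the real logic lives in the runtime's CNI plugin manager (which also validates file contents), and the *.conf/*.conflist/*.json patterns are the usual libcni naming conventions, assumed here:

```go
// cnicheck probes the directory named in the kubelet error above.
// Illustrative sketch only, not the actual kubelet/CRI-O code.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const confDir = "/etc/kubernetes/cni/net.d" // directory from the log message
	var found []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pat))
		if err != nil {
			fmt.Fprintln(os.Stderr, "bad pattern:", err)
			os.Exit(1)
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		// This is the state the node is in: NetworkReady=false.
		fmt.Printf("no CNI configuration file in %s\n", confDir)
		os.Exit(1)
	}
	fmt.Println("CNI config candidates:", found)
}
```

On this node the check would print the "no CNI configuration file" branch until the network provider (multus/OVN) writes its config, at which point the heartbeat flips to Ready.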
pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:02:29 crc kubenswrapper[4893]: I0128 15:02:29.934764 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:29 crc kubenswrapper[4893]: I0128 15:02:29.934827 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:29 crc kubenswrapper[4893]: I0128 15:02:29.934843 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:29 crc kubenswrapper[4893]: I0128 15:02:29.934861 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:29 crc kubenswrapper[4893]: I0128 15:02:29.934878 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:29Z","lastTransitionTime":"2026-01-28T15:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.037105 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.037212 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.037235 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.037265 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.037286 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:30Z","lastTransitionTime":"2026-01-28T15:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.139848 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.139897 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.139906 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.139924 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.139934 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:30Z","lastTransitionTime":"2026-01-28T15:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.242650 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.242693 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.242703 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.242718 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.242729 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:30Z","lastTransitionTime":"2026-01-28T15:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.344972 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.345008 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.345019 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.345035 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.345047 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:30Z","lastTransitionTime":"2026-01-28T15:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.447405 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.447532 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.447589 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.447607 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.447618 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:30Z","lastTransitionTime":"2026-01-28T15:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.549897 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.550081 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.550206 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.550305 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.550400 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:30Z","lastTransitionTime":"2026-01-28T15:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.653915 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.653964 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.653974 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.653991 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.654002 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:30Z","lastTransitionTime":"2026-01-28T15:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.756252 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.756306 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.756319 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.756342 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.756355 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:30Z","lastTransitionTime":"2026-01-28T15:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.859186 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.859224 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.859235 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.859254 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.859268 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:30Z","lastTransitionTime":"2026-01-28T15:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.871604 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 04:24:21.199270696 +0000 UTC Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.891647 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.891688 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:02:30 crc kubenswrapper[4893]: E0128 15:02:30.891809 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.891859 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:02:30 crc kubenswrapper[4893]: E0128 15:02:30.891967 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:02:30 crc kubenswrapper[4893]: E0128 15:02:30.892050 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.964077 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.964129 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.964146 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.964171 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:30 crc kubenswrapper[4893]: I0128 15:02:30.964189 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:30Z","lastTransitionTime":"2026-01-28T15:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[... NodeNotReady heartbeat blocks elided: 15:02:30.964 and 15:02:31.066 through 15:02:31.272 ...]
Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.371137 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-krkz9_a51e5a50-969c-4f25-a895-ebb119642512/kube-multus/0.log"
Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.371177 4893 generic.go:334] "Generic (PLEG): container finished" podID="a51e5a50-969c-4f25-a895-ebb119642512" containerID="4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0" exitCode=1
Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.371205 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-krkz9" event={"ID":"a51e5a50-969c-4f25-a895-ebb119642512","Type":"ContainerDied","Data":"4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0"}
Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.371567 4893 scope.go:117] "RemoveContainer" containerID="4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0"
[... NodeNotReady heartbeat block at 15:02:31.374 elided ...]
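The PLEG records above show the network provider itself failing: the kube-multus container exited with code 1, and the kubelet parsed its previous log file before scheduling a restart. The path in the log.go line follows the kubelet's pod-log layout, /var/log/pods/&lt;namespace&gt;_&lt;pod&gt;_&lt;podUID&gt;/&lt;container&gt;/&lt;restartCount&gt;.log, which can be reconstructed directly from the identifiers in these records:

```go
// podlogpath rebuilds the kubelet pod-log path format visible in the
// log.go record above.
package main

import (
	"fmt"
	"path/filepath"
)

func podLogFile(ns, pod, uid, container string, restart int) string {
	return filepath.Join("/var/log/pods",
		fmt.Sprintf("%s_%s_%s", ns, pod, uid),
		container,
		fmt.Sprintf("%d.log", restart))
}

func main() {
	// Values taken from the journal entries above.
	p := podLogFile("openshift-multus", "multus-krkz9",
		"a51e5a50-969c-4f25-a895-ebb119642512", "kube-multus", 0)
	fmt.Println(p)
	// Prints the same path the kubelet reported parsing:
	// /var/log/pods/openshift-multus_multus-krkz9_a51e5a50-969c-4f25-a895-ebb119642512/kube-multus/0.log
}
```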
Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.385369 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:31Z is after 2025-08-24T17:21:41Z"
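From here on, every pod-status patch is rejected for the same reason: the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743 serves a certificate whose NotAfter (2025-08-24T17:21:41Z) is behind the node clock (2026-01-28), so the kubelet's TLS handshake fails validity verification. The failing comparison is the standard x509 validity-window check, sketched below; real verification happens inside the TLS handshake via (*x509.Certificate).Verify, this just isolates the time comparison:

```go
// certwindow reproduces the validity check that fails in the webhook
// calls above ("certificate has expired or is not yet valid").
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile(os.Args[1]) // path to a PEM-encoded cert
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now().UTC()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	case now.After(cert.NotAfter):
		// The journal's case: current time 2026-01-28T15:02:31Z is
		// after NotAfter 2025-08-24T17:21:41Z.
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	default:
		fmt.Println("within validity window")
	}
}
```

Note the status updates themselves are well-formed; the Internal error comes entirely from the webhook call, so the node keeps retrying the same patches, as the repeated status_manager.go records below show.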
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.414746 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"673baa26-aa9b-4740-b00a-27d20d947fc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0855aa9e95ef9375ded8247008dd1b259573f1b89b2aacb460cb2de280d4289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fc6009434717a8377a5cc6a836235c61de79ea9f85581a22d8c2229dde2c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9hnxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:31Z is after 2025-08-24T17:21:41Z" Jan 28 
15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.428128 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-28T15:02:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.442064 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.453358 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6908a5e2-782f-4cc6-af34-a62a546a9dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8aebd6e0a3c524d6d950232c7b12bb477ac32d9334863f94e6b3fa094689076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a64db1e71a93568b0df4fd7ba25703120bf42dad2dac4a67a4c59481fea8e45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7d43d04a0e8db178dabe494cea96d5bce395b0323225f7aefeab20beb8376d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.471260 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4132d2592d5c12a6e2fe340b9a6bc7fb6104bcd378bb48e44da753c0ed1be380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.476163 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.476200 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 
15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.476212 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.476229 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.476241 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:31Z","lastTransitionTime":"2026-01-28T15:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.484136 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.498035 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.511164 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.529670 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:02:30Z\\\",\\\"message\\\":\\\"2026-01-28T15:01:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2d0ac4ac-2bdb-4cf2-a16a-53924abd5864\\\\n2026-01-28T15:01:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2d0ac4ac-2bdb-4cf2-a16a-53924abd5864 to /host/opt/cni/bin/\\\\n2026-01-28T15:01:45Z [verbose] multus-daemon started\\\\n2026-01-28T15:01:45Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:02:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.540764 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.551158 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.566112 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.578588 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.578625 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.578636 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.578651 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:31 crc 
kubenswrapper[4893]: I0128 15:02:31.578663 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:31Z","lastTransitionTime":"2026-01-28T15:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.583046 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\
\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.600739 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f2e87210cb1a34cf493a3e3bc1dcfd791f4e4e524fcaf96bf8482bd5d3dd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f2e87210cb1a34cf493a3e3bc1dcfd791f4e4e524fcaf96bf8482bd5d3dd3b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:02:12Z\\\",\\\"message\\\":\\\"05 6560 factory.go:656] Stopping watch factory\\\\nI0128 15:02:12.779031 6560 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:02:12.779117 6560 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779169 6560 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779212 6560 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779249 6560 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779547 6560 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.783327 6560 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0128 15:02:12.783354 6560 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0128 15:02:12.783409 6560 ovnkube.go:599] Stopped ovnkube\\\\nI0128 15:02:12.783440 6560 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 15:02:12.783536 6560 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:02:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5q54w_openshift-ovn-kubernetes(135b9f51-26ac-44c4-a817-cbfa4b36ae54)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.610349 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dqjfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27c2667f-3b81-4103-b924-fd2ec1678757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dqjfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.681180 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.681367 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.681449 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.681544 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.681606 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:31Z","lastTransitionTime":"2026-01-28T15:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.783914 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.783950 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.783959 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.783975 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.783985 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:31Z","lastTransitionTime":"2026-01-28T15:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.871872 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 16:04:31.690114681 +0000 UTC Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.886770 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.886971 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.887063 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.887154 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.887258 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:31Z","lastTransitionTime":"2026-01-28T15:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.890985 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:02:31 crc kubenswrapper[4893]: E0128 15:02:31.891102 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.990282 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.990345 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.990358 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.990378 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:31 crc kubenswrapper[4893]: I0128 15:02:31.990394 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:31Z","lastTransitionTime":"2026-01-28T15:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.092999 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.093050 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.093062 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.093082 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.093092 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:32Z","lastTransitionTime":"2026-01-28T15:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.196290 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.196352 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.196373 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.196398 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.196417 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:32Z","lastTransitionTime":"2026-01-28T15:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.299303 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.299421 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.299438 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.299458 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.299486 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:32Z","lastTransitionTime":"2026-01-28T15:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.375973 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-krkz9_a51e5a50-969c-4f25-a895-ebb119642512/kube-multus/0.log" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.376025 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-krkz9" event={"ID":"a51e5a50-969c-4f25-a895-ebb119642512","Type":"ContainerStarted","Data":"0c04d370bb1ae00208a3c91c24a8d3bf60236155408ef3b6b3224e20ffdd5d0b"} Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.393793 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.402348 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.402388 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.402399 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.402414 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.402426 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:32Z","lastTransitionTime":"2026-01-28T15:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.407369 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c04d370bb1ae00208a3c91c24a8d3bf60236155408ef3b6b3224e20ffdd5d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:02:30Z\\\",\\\"message\\\":\\\"2026-01-28T15:01:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2d0ac4ac-2bdb-4cf2-a16a-53924abd5864\\\\n2026-01-28T15:01:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2d0ac4ac-2bdb-4cf2-a16a-53924abd5864 to 
/host/opt/cni/bin/\\\\n2026-01-28T15:01:45Z [verbose] multus-daemon started\\\\n2026-01-28T15:01:45Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:02:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.419370 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
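The kube-multus termination message above shows why the container's first attempt exited 1: multus polls for a readiness indicator file that ovn-kubernetes writes once the default network is up (/host/run/multus/cni/net.d/10-ovn-kubernetes.conf), and the poll timed out. The same poll-immediate pattern in stdlib Go (path from the log; interval and timeout are illustrative, though the log shows roughly 45 seconds between container start and the timeout):

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// waitForFile polls until path exists or the timeout elapses, the same
// poll-immediate pattern behind "timed out waiting for the condition".
func waitForFile(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // readiness indicator file is present
		} else if !errors.Is(err, os.ErrNotExist) {
			return err // unexpected stat error
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(interval)
	}
}

func main() {
	err := waitForFile("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf",
		time.Second, 45*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1) // matches the container's exit code 1 in the log
	}
}
```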
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.430271 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6908a5e2-782f-4cc6-af34-a62a546a9dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8aebd6e0a3c524d6d950232c7b12bb477ac32d9334863f94e6b3fa094689076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a64db1e71a93568b0df4fd7ba25703120bf42dad2dac4a67a4c59481fea8e45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7d43d04a0e8db178dabe494cea96d5bce395b0323225f7aefeab20beb8376d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.447375 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4132d2592d5c12a6e2fe340b9a6bc7fb6104bcd378bb48e44da753c0ed1be380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.460259 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.472249 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.482156 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dqjfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27c2667f-3b81-4103-b924-fd2ec1678757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dqjfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.499996 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.504731 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.504769 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.504782 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.504800 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.504818 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:32Z","lastTransitionTime":"2026-01-28T15:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
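Each status_manager.go:875 entry embeds the attempted status patch under several layers of backslash escaping, since the patch is quoted inside the err string, which is itself quoted inside the journal line. A small Go helper to peel the quote layers from a copied payload and pretty-print it (the raw value below is a short, truncated stand-in, not one of the full patches above):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strconv"
)

func main() {
	// Stand-in for a patch payload copied out of a journal line; real
	// payloads are much longer and may carry more than one escape layer.
	raw := `"{\"metadata\":{\"uid\":\"a51e5a50-969c-4f25-a895-ebb119642512\"},\"status\":{\"phase\":\"Running\"}}"`

	// Peel quote layers until the bare JSON object remains.
	s := raw
	for len(s) > 0 && s[0] == '"' {
		u, err := strconv.Unquote(s)
		if err != nil {
			fmt.Println("not a quoted string:", err)
			return
		}
		s = u
	}

	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(s), "", "  "); err != nil {
		fmt.Println("not JSON:", err)
		return
	}
	fmt.Println(pretty.String())
}
```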
Has your network provider started?"} Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.514990 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.525336 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.540659 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.558951 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status 
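All of these patch failures share one root cause: the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743 is serving a certificate that expired 2025-08-24T17:21:41Z, long before the node's clock reading of 2026-01-28, so the API server rejects every pod status patch. A quick Go check that fetches the cert from the webhook endpoint and prints its validity window (address from the log; InsecureSkipVerify is used only so the handshake survives the expired cert long enough to read it):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"os"
	"time"
)

func main() {
	// Address taken from the webhook error above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // we want the cert even though it is expired
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, "dial:", err)
		os.Exit(1)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore.Format(time.RFC3339))
	fmt.Println("notAfter: ", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		fmt.Println("certificate has expired") // matches the x509 error in the log
	}
}
```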
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f2e87210cb1a34cf493a3e3bc1dcfd791f4e4e524fcaf96bf8482bd5d3dd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f2e87210cb1a34cf493a3e3bc1dcfd791f4e4e524fcaf96bf8482bd5d3dd3b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:02:12Z\\\",\\\"message\\\":\\\"05 6560 factory.go:656] Stopping watch factory\\\\nI0128 15:02:12.779031 6560 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:02:12.779117 6560 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779169 6560 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779212 6560 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779249 6560 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779547 6560 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.783327 6560 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0128 15:02:12.783354 6560 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0128 15:02:12.783409 6560 ovnkube.go:599] Stopped ovnkube\\\\nI0128 15:02:12.783440 6560 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 15:02:12.783536 6560 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:02:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5q54w_openshift-ovn-kubernetes(135b9f51-26ac-44c4-a817-cbfa4b36ae54)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.575006 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.584935 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.596550 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"673baa26-aa9b-4740-b00a-27d20d947fc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0855aa9e95ef9375ded8247008dd1b259573f1b89b2aacb460cb2de280d4289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fc6009434717a8377a5cc6a836235c61de79ea9f85581a22d8c2229dde2c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9hnxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:32Z is after 2025-08-24T17:21:41Z" Jan 28 
15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.608467 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.608767 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.608881 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.608984 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.609074 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:32Z","lastTransitionTime":"2026-01-28T15:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.610754 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63
a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.712150 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.712180 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.712189 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.712207 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.712217 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:32Z","lastTransitionTime":"2026-01-28T15:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.815075 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.815352 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.815429 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.815508 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.815579 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:32Z","lastTransitionTime":"2026-01-28T15:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.873530 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 15:12:06.25564831 +0000 UTC Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.891388 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:02:32 crc kubenswrapper[4893]: E0128 15:02:32.891553 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.891766 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:02:32 crc kubenswrapper[4893]: E0128 15:02:32.891842 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.892058 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:02:32 crc kubenswrapper[4893]: E0128 15:02:32.892125 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.910033 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948
2919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f2e87210cb1a34cf493a3e3bc1dcfd791f4e4e524fcaf96bf8482bd5d3dd3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f2e87210cb1a34cf493a3e3bc1dcfd791f4e4e524fcaf96bf8482bd5d3dd3b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:02:12Z\\\",\\\"message\\\":\\\"05 6560 factory.go:656] Stopping watch factory\\\\nI0128 15:02:12.779031 6560 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:02:12.779117 6560 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779169 6560 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779212 6560 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779249 6560 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779547 6560 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.783327 6560 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0128 15:02:12.783354 6560 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0128 15:02:12.783409 6560 ovnkube.go:599] Stopped ovnkube\\\\nI0128 15:02:12.783440 6560 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 15:02:12.783536 6560 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:02:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed 
container=ovnkube-controller pod=ovnkube-node-5q54w_openshift-ovn-kubernetes(135b9f51-26ac-44c4-a817-cbfa4b36ae54)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.918594 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.918648 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.918661 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.918684 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.918697 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:32Z","lastTransitionTime":"2026-01-28T15:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.927227 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dqjfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27c2667f-3b81-4103-b924-fd2ec1678757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dqjfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.939516 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.950500 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.960172 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.975992 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:32 crc kubenswrapper[4893]: I0128 15:02:32.993779 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"673baa26-aa9b-4740-b00a-27d20d947fc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0855aa9e95ef9375ded8247008dd1b259573f1b89b2aacb460cb2de280d4289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fc6009434717a8377a5cc6a836235c61de79ea9f85581a22d8c2229dde2c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9hnxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:32Z is after 2025-08-24T17:21:41Z" Jan 28 
15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.008421 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.018552 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.022584 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.022625 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.022636 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.022652 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.022663 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:33Z","lastTransitionTime":"2026-01-28T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.030041 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.042731 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.054306 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.067783 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c04d370bb1ae00208a3c91c24a8d3bf60236155408ef3b6b3224e20ffdd5d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:02:30Z\\\",\\\"message\\\":\\\"2026-01-28T15:01:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2d0ac4ac-2bdb-4cf2-a16a-53924abd5864\\\\n2026-01-28T15:01:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2d0ac4ac-2bdb-4cf2-a16a-53924abd5864 to /host/opt/cni/bin/\\\\n2026-01-28T15:01:45Z [verbose] multus-daemon started\\\\n2026-01-28T15:01:45Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:02:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.079560 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.092655 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6908a5e2-782f-4cc6-af34-a62a546a9dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8aebd6e0a3c524d6d950232c7b12bb477ac32d9334863f94e6b3fa094689076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a64db1e71a93568b0df4fd7ba25703120bf42dad2dac4a67a4c59481fea8e45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7d43d04a0e8db178dabe494cea96d5bce395b0323225f7aefeab20beb8376d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.107047 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4132d2592d5c12a6e2fe340b9a6bc7fb6104bcd378bb48e44da753c0ed1be380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.118500 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.125423 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.125469 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.125507 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.125525 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.125534 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:33Z","lastTransitionTime":"2026-01-28T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.227948 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.227994 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.228006 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.228026 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.228039 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:33Z","lastTransitionTime":"2026-01-28T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.330582 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.330630 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.330643 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.330662 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.330675 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:33Z","lastTransitionTime":"2026-01-28T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.433349 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.433634 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.433893 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.434119 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.434321 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:33Z","lastTransitionTime":"2026-01-28T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.537317 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.537363 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.537373 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.537390 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.537401 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:33Z","lastTransitionTime":"2026-01-28T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.640521 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.640570 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.640584 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.640603 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.640618 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:33Z","lastTransitionTime":"2026-01-28T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.744423 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.744770 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.744875 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.744985 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.745060 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:33Z","lastTransitionTime":"2026-01-28T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.847681 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.847725 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.847735 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.847753 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.847763 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:33Z","lastTransitionTime":"2026-01-28T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.875133 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 20:34:05.255165837 +0000 UTC Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.891594 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:02:33 crc kubenswrapper[4893]: E0128 15:02:33.891743 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.922830 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.922882 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.922894 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.922954 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.922967 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:33Z","lastTransitionTime":"2026-01-28T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:33 crc kubenswrapper[4893]: E0128 15:02:33.936412 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.940620 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.940676 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.940689 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.940710 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.940721 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:33Z","lastTransitionTime":"2026-01-28T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:33 crc kubenswrapper[4893]: E0128 15:02:33.955009 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.959112 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.959142 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.959154 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.959173 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.959184 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:33Z","lastTransitionTime":"2026-01-28T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:33 crc kubenswrapper[4893]: E0128 15:02:33.972986 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.976539 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.976586 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.976598 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.976619 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.976631 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:33Z","lastTransitionTime":"2026-01-28T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:33 crc kubenswrapper[4893]: E0128 15:02:33.988622 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:33Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.991610 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.991637 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.991649 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.991664 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:33 crc kubenswrapper[4893]: I0128 15:02:33.991676 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:33Z","lastTransitionTime":"2026-01-28T15:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:34 crc kubenswrapper[4893]: E0128 15:02:34.002119 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:34 crc kubenswrapper[4893]: E0128 15:02:34.002227 4893 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.003776 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.003809 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.003819 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.003834 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.003844 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:34Z","lastTransitionTime":"2026-01-28T15:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.106455 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.106500 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.106509 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.106524 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.106533 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:34Z","lastTransitionTime":"2026-01-28T15:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.209103 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.209156 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.209170 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.209192 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.209202 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:34Z","lastTransitionTime":"2026-01-28T15:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.311703 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.311782 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.311797 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.311818 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.311855 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:34Z","lastTransitionTime":"2026-01-28T15:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.414633 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.414975 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.415074 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.415149 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.415211 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:34Z","lastTransitionTime":"2026-01-28T15:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.518988 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.519042 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.519054 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.519075 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.519086 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:34Z","lastTransitionTime":"2026-01-28T15:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.622068 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.622352 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.622432 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.622536 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.622601 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:34Z","lastTransitionTime":"2026-01-28T15:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.724776 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.724817 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.724828 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.724845 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.724857 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:34Z","lastTransitionTime":"2026-01-28T15:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.827368 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.827403 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.827413 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.827431 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.827461 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:34Z","lastTransitionTime":"2026-01-28T15:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.875807 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 21:33:25.078842575 +0000 UTC Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.891030 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.891072 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.891044 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:02:34 crc kubenswrapper[4893]: E0128 15:02:34.891265 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:02:34 crc kubenswrapper[4893]: E0128 15:02:34.891181 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:02:34 crc kubenswrapper[4893]: E0128 15:02:34.891570 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.929911 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.929966 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.929979 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.929998 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:34 crc kubenswrapper[4893]: I0128 15:02:34.930010 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:34Z","lastTransitionTime":"2026-01-28T15:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.032255 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.032293 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.032331 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.032348 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.032360 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:35Z","lastTransitionTime":"2026-01-28T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.135059 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.135102 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.135115 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.135133 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.135145 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:35Z","lastTransitionTime":"2026-01-28T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.238349 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.238660 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.238745 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.238832 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.239226 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:35Z","lastTransitionTime":"2026-01-28T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.341808 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.341856 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.341865 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.341888 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.341896 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:35Z","lastTransitionTime":"2026-01-28T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.444160 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.444284 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.444306 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.444338 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.444360 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:35Z","lastTransitionTime":"2026-01-28T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.547061 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.547150 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.547180 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.547230 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.547259 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:35Z","lastTransitionTime":"2026-01-28T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.651015 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.651066 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.651087 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.651116 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.651139 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:35Z","lastTransitionTime":"2026-01-28T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.754804 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.754871 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.754890 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.754925 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.754946 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:35Z","lastTransitionTime":"2026-01-28T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.858982 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.859044 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.859059 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.859080 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.859094 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:35Z","lastTransitionTime":"2026-01-28T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.876250 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 14:35:42.809044837 +0000 UTC Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.891770 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:02:35 crc kubenswrapper[4893]: E0128 15:02:35.891918 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.961941 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.961987 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.961997 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.962014 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:35 crc kubenswrapper[4893]: I0128 15:02:35.962024 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:35Z","lastTransitionTime":"2026-01-28T15:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.064867 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.064909 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.064921 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.064943 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.064955 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:36Z","lastTransitionTime":"2026-01-28T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.167460 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.167784 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.167807 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.167827 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.167843 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:36Z","lastTransitionTime":"2026-01-28T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.270260 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.270307 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.270317 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.270335 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.270346 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:36Z","lastTransitionTime":"2026-01-28T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.373371 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.373707 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.373807 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.373894 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.373962 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:36Z","lastTransitionTime":"2026-01-28T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.476633 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.476676 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.476688 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.476704 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.476717 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:36Z","lastTransitionTime":"2026-01-28T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.579553 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.579588 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.579599 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.579615 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.579625 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:36Z","lastTransitionTime":"2026-01-28T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.682294 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.682392 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.682409 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.682435 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.682455 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:36Z","lastTransitionTime":"2026-01-28T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.785708 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.785957 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.786071 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.786161 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.786231 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:36Z","lastTransitionTime":"2026-01-28T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.876947 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 13:33:24.627648756 +0000 UTC Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.889798 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.889874 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.889896 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.889929 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.889949 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:36Z","lastTransitionTime":"2026-01-28T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.890809 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.890892 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.890921 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:02:36 crc kubenswrapper[4893]: E0128 15:02:36.891076 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:02:36 crc kubenswrapper[4893]: E0128 15:02:36.891372 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:02:36 crc kubenswrapper[4893]: E0128 15:02:36.891532 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.992929 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.992977 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.992990 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.993012 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:36 crc kubenswrapper[4893]: I0128 15:02:36.993026 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:36Z","lastTransitionTime":"2026-01-28T15:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.095927 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.096224 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.096398 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.096532 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.096814 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:37Z","lastTransitionTime":"2026-01-28T15:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.199508 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.199901 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.200036 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.200118 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.200196 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:37Z","lastTransitionTime":"2026-01-28T15:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.303106 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.303151 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.303161 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.303179 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.303188 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:37Z","lastTransitionTime":"2026-01-28T15:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.405988 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.406070 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.406082 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.406124 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.406137 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:37Z","lastTransitionTime":"2026-01-28T15:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.509202 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.509258 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.509270 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.509295 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.509311 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:37Z","lastTransitionTime":"2026-01-28T15:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.612699 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.612766 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.612781 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.612803 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.612815 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:37Z","lastTransitionTime":"2026-01-28T15:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.716521 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.716582 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.716598 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.716616 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.716631 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:37Z","lastTransitionTime":"2026-01-28T15:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.819977 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.820058 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.820072 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.820413 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.820437 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:37Z","lastTransitionTime":"2026-01-28T15:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.878243 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 01:49:09.000347909 +0000 UTC
Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.891605 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn"
Jan 28 15:02:37 crc kubenswrapper[4893]: E0128 15:02:37.891771 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757"
pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.922802 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.922856 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.922869 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.922889 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:37 crc kubenswrapper[4893]: I0128 15:02:37.922902 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:37Z","lastTransitionTime":"2026-01-28T15:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.026168 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.026207 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.026219 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.026235 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.026247 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:38Z","lastTransitionTime":"2026-01-28T15:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.128724 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.128763 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.128772 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.128788 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.128796 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:38Z","lastTransitionTime":"2026-01-28T15:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.231870 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.231916 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.231932 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.231950 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.231961 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:38Z","lastTransitionTime":"2026-01-28T15:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.334194 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.334277 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.334290 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.334304 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.334313 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:38Z","lastTransitionTime":"2026-01-28T15:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.436600 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.436644 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.436660 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.436680 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.436690 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:38Z","lastTransitionTime":"2026-01-28T15:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.539654 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.539736 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.539754 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.539771 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.539784 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:38Z","lastTransitionTime":"2026-01-28T15:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.642209 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.642249 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.642259 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.642274 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.642287 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:38Z","lastTransitionTime":"2026-01-28T15:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.746355 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.747238 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.747555 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.747822 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.748110 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:38Z","lastTransitionTime":"2026-01-28T15:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.852162 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.852202 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.852214 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.852229 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.852240 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:38Z","lastTransitionTime":"2026-01-28T15:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.879455 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 19:59:20.319427531 +0000 UTC
Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.892402 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.892409 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.892550 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:02:38 crc kubenswrapper[4893]: E0128 15:02:38.892669 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:02:38 crc kubenswrapper[4893]: E0128 15:02:38.892945 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:02:38 crc kubenswrapper[4893]: E0128 15:02:38.893067 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.956493 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.956566 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.956575 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.956595 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:38 crc kubenswrapper[4893]: I0128 15:02:38.956623 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:38Z","lastTransitionTime":"2026-01-28T15:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.059564 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.059647 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.059662 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.059692 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.059710 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:39Z","lastTransitionTime":"2026-01-28T15:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.164517 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.164572 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.164585 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.164606 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.164617 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:39Z","lastTransitionTime":"2026-01-28T15:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.267692 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.267768 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.267790 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.267817 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.267834 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:39Z","lastTransitionTime":"2026-01-28T15:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.376679 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.376747 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.376763 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.376794 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.376812 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:39Z","lastTransitionTime":"2026-01-28T15:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.479831 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.479899 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.479921 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.479946 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.479961 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:39Z","lastTransitionTime":"2026-01-28T15:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.582899 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.582941 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.582953 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.582969 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.582979 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:39Z","lastTransitionTime":"2026-01-28T15:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.685805 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.685867 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.685880 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.685904 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.685915 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:39Z","lastTransitionTime":"2026-01-28T15:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.788876 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.788955 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.788970 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.788993 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.789006 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:39Z","lastTransitionTime":"2026-01-28T15:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.880497 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 00:31:54.828000041 +0000 UTC
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.890767 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn"
Jan 28 15:02:39 crc kubenswrapper[4893]: E0128 15:02:39.890923 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.892550 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.892580 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.892596 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.892614 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.892624 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:39Z","lastTransitionTime":"2026-01-28T15:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.996072 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.996129 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.996148 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.996175 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:39 crc kubenswrapper[4893]: I0128 15:02:39.996196 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:39Z","lastTransitionTime":"2026-01-28T15:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.099431 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.099503 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.099518 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.099544 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.099557 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:40Z","lastTransitionTime":"2026-01-28T15:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.202787 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.202841 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.202854 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.202873 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.202890 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:40Z","lastTransitionTime":"2026-01-28T15:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.305438 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.305980 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.306097 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.306129 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.306148 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:40Z","lastTransitionTime":"2026-01-28T15:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.408460 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.408912 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.409005 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.409142 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.409248 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:40Z","lastTransitionTime":"2026-01-28T15:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.513881 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.513939 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.513954 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.513977 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.513992 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:40Z","lastTransitionTime":"2026-01-28T15:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.616934 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.617003 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.617022 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.617049 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.617067 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:40Z","lastTransitionTime":"2026-01-28T15:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.719723 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.719786 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.719799 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.719820 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.719830 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:40Z","lastTransitionTime":"2026-01-28T15:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.822116 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.822414 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.822503 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.822576 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.822642 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:40Z","lastTransitionTime":"2026-01-28T15:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.881584 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 07:56:31.449839146 +0000 UTC
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.891500 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.891500 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:02:40 crc kubenswrapper[4893]: E0128 15:02:40.891722 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.891917 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:02:40 crc kubenswrapper[4893]: E0128 15:02:40.892101 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:02:40 crc kubenswrapper[4893]: E0128 15:02:40.892253 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.924719 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.924897 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.924967 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.925052 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:40 crc kubenswrapper[4893]: I0128 15:02:40.925130 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:40Z","lastTransitionTime":"2026-01-28T15:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.027894 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.027969 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.028001 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.028036 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.028058 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:41Z","lastTransitionTime":"2026-01-28T15:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.132559 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.132631 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.132647 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.132670 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.132688 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:41Z","lastTransitionTime":"2026-01-28T15:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.235400 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.235448 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.235463 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.235514 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.235532 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:41Z","lastTransitionTime":"2026-01-28T15:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.338843 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.338899 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.338918 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.338943 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.338960 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:41Z","lastTransitionTime":"2026-01-28T15:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.441341 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.441704 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.441781 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.441849 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.441955 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:41Z","lastTransitionTime":"2026-01-28T15:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.544687 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.544756 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.544774 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.544801 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.544823 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:41Z","lastTransitionTime":"2026-01-28T15:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.647517 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.647552 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.647563 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.647582 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.647591 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:41Z","lastTransitionTime":"2026-01-28T15:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.750882 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.750932 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.750943 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.750962 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.750972 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:41Z","lastTransitionTime":"2026-01-28T15:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.853607 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.854034 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.854204 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.854296 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.854361 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:41Z","lastTransitionTime":"2026-01-28T15:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.882016 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 04:54:18.058847329 +0000 UTC Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.891424 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:02:41 crc kubenswrapper[4893]: E0128 15:02:41.892065 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.892898 4893 scope.go:117] "RemoveContainer" containerID="89f2e87210cb1a34cf493a3e3bc1dcfd791f4e4e524fcaf96bf8482bd5d3dd3b" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.958334 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.958396 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.958409 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.958429 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:41 crc kubenswrapper[4893]: I0128 15:02:41.958443 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:41Z","lastTransitionTime":"2026-01-28T15:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.061088 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.061425 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.061519 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.061594 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.061653 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:42Z","lastTransitionTime":"2026-01-28T15:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.164936 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.164987 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.165000 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.165020 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.165031 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:42Z","lastTransitionTime":"2026-01-28T15:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.268893 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.269054 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.269149 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.269247 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.269723 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:42Z","lastTransitionTime":"2026-01-28T15:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.373746 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.373775 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.373784 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.373800 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.373810 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:42Z","lastTransitionTime":"2026-01-28T15:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.420002 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5q54w_135b9f51-26ac-44c4-a817-cbfa4b36ae54/ovnkube-controller/2.log" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.423096 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" event={"ID":"135b9f51-26ac-44c4-a817-cbfa4b36ae54","Type":"ContainerStarted","Data":"2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0"} Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.423604 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.439096 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a8
1f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:42Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.452449 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6908a5e2-782f-4cc6-af34-a62a546a9dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8aebd6e0a3c524d6d950232c7b12bb477ac32d9334863f94e6b3fa094689076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a64db1e71a93568b0df4fd7ba25703120bf42dad2dac4a67a4c59481fea8e45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7d43d04a0e8db178dabe494cea96d5bce395b0323225f7aefeab20beb8376d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:42Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.467470 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4132d2592d5c12a6e2fe340b9a6bc7fb6104bcd378bb48e44da753c0ed1be380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:42Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.477087 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.477170 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 
15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.477186 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.477252 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.477266 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:42Z","lastTransitionTime":"2026-01-28T15:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.482909 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:42Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.495514 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:42Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.507250 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:42Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.520825 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c04d370bb1ae00208a3c91c24a8d3bf60236155408ef3b6b3224e20ffdd5d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:02:30Z\\\",\\\"message\\\":\\\"2026-01-28T15:01:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2d0ac4ac-2bdb-4cf2-a16a-53924abd5864\\\\n2026-01-28T15:01:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2d0ac4ac-2bdb-4cf2-a16a-53924abd5864 to /host/opt/cni/bin/\\\\n2026-01-28T15:01:45Z [verbose] multus-daemon started\\\\n2026-01-28T15:01:45Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:02:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:42Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.535920 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:42Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.550444 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:42Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.564882 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:42Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.581880 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.581920 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.581932 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.581951 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:42 crc 
kubenswrapper[4893]: I0128 15:02:42.581963 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:42Z","lastTransitionTime":"2026-01-28T15:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.594714 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\
\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:42Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.625514 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics
-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f2e87210cb1a34cf493a3e3bc1dcfd791f4e4e524fcaf96bf8482bd5d3dd3b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:02:12Z\\\",\\\"message\\\":\\\"05 6560 factory.go:656] Stopping watch factory\\\\nI0128 15:02:12.779031 6560 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:02:12.779117 6560 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779169 6560 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779212 6560 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779249 6560 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779547 6560 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.783327 6560 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0128 15:02:12.783354 6560 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0128 15:02:12.783409 6560 ovnkube.go:599] Stopped ovnkube\\\\nI0128 15:02:12.783440 6560 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 15:02:12.783536 6560 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:02:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:42Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.652135 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dqjfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27c2667f-3b81-4103-b924-fd2ec1678757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dqjfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:42Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.665673 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:42Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.683741 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:42Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.685552 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.685603 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.685618 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.685642 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.685657 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:42Z","lastTransitionTime":"2026-01-28T15:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.703886 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"673baa26-aa9b-4740-b00a-27d20d947fc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0855aa9e95ef9375ded8247008dd1b259573f1b89b2aacb460cb2de280d4289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fc6009434717a8377a5cc6a836235c61de79ea9f85581a22d8c2229dde2c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:55Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9hnxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:42Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.750054 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.1
68.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:42Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.787654 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.787703 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.787716 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.787737 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.787747 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:42Z","lastTransitionTime":"2026-01-28T15:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.882977 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 03:55:03.351075239 +0000 UTC Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.890036 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.890081 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.890091 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.890109 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.890118 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:42Z","lastTransitionTime":"2026-01-28T15:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.891129 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.891267 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:02:42 crc kubenswrapper[4893]: E0128 15:02:42.891356 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.891376 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:02:42 crc kubenswrapper[4893]: E0128 15:02:42.891495 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:02:42 crc kubenswrapper[4893]: E0128 15:02:42.891643 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.915213 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4132d2592d5c12a6e2fe340b9a6bc7fb6104bcd378bb48e44da753c0ed1be380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:42Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.929832 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:42Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.944755 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:42Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.958027 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:42Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.973429 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c04d370bb1ae00208a3c91c24a8d3bf60236155408ef3b6b3224e20ffdd5d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:02:30Z\\\",\\\"message\\\":\\\"2026-01-28T15:01:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2d0ac4ac-2bdb-4cf2-a16a-53924abd5864\\\\n2026-01-28T15:01:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2d0ac4ac-2bdb-4cf2-a16a-53924abd5864 to /host/opt/cni/bin/\\\\n2026-01-28T15:01:45Z [verbose] multus-daemon started\\\\n2026-01-28T15:01:45Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:02:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:42Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.992095 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.992165 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.992180 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.992198 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.992230 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:42Z","lastTransitionTime":"2026-01-28T15:02:42Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:42 crc kubenswrapper[4893]: I0128 15:02:42.997458 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:42Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.015082 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6908a5e2-782f-4cc6-af34-a62a546a9dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8aebd6e0a3c524d6d950232c7b12bb477ac32d9334863f94e6b3fa094689076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a64db1e71a93568b0df4fd7ba25703120bf42dad2dac4a67a4c59481fea8e45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7d43d04a0e8db178dabe494cea96d5bce395b0323225f7aefeab20beb8376d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.028416 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.048935 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://
c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.073256 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f2e87210cb1a34cf493a3e3bc1dcfd791f4e4e524fcaf96bf8482bd5d3dd3b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:02:12Z\\\",\\\"message\\\":\\\"05 6560 factory.go:656] Stopping watch factory\\\\nI0128 15:02:12.779031 6560 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:02:12.779117 6560 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779169 6560 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779212 6560 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779249 6560 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779547 6560 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.783327 6560 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0128 15:02:12.783354 6560 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0128 15:02:12.783409 6560 ovnkube.go:599] Stopped ovnkube\\\\nI0128 15:02:12.783440 6560 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 15:02:12.783536 6560 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:02:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.086500 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dqjfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27c2667f-3b81-4103-b924-fd2ec1678757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dqjfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.095146 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.095200 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.095215 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.095235 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.095269 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:43Z","lastTransitionTime":"2026-01-28T15:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.101828 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.116142 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.142991 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.155981 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.166934 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"673baa26-aa9b-4740-b00a-27d20d947fc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0855aa9e95ef9375ded8247008dd1b259573f1b89b2aacb460cb2de280d4289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fc6009434717a8377a5cc6a836235c61de79ea9f85581a22d8c2229dde2c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9hnxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z" Jan 28 
15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.176767 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.197813 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.197891 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.197906 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.197924 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.197962 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:43Z","lastTransitionTime":"2026-01-28T15:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.299920 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.300222 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.300235 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.300252 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.300262 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:43Z","lastTransitionTime":"2026-01-28T15:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.403290 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.403583 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.403621 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.403832 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.403844 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:43Z","lastTransitionTime":"2026-01-28T15:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.428918 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5q54w_135b9f51-26ac-44c4-a817-cbfa4b36ae54/ovnkube-controller/3.log" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.429853 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5q54w_135b9f51-26ac-44c4-a817-cbfa4b36ae54/ovnkube-controller/2.log" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.432534 4893 generic.go:334] "Generic (PLEG): container finished" podID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerID="2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0" exitCode=1 Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.432581 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" event={"ID":"135b9f51-26ac-44c4-a817-cbfa4b36ae54","Type":"ContainerDied","Data":"2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0"} Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.432636 4893 scope.go:117] "RemoveContainer" containerID="89f2e87210cb1a34cf493a3e3bc1dcfd791f4e4e524fcaf96bf8482bd5d3dd3b" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.433534 4893 scope.go:117] "RemoveContainer" containerID="2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0" Jan 28 15:02:43 crc kubenswrapper[4893]: E0128 15:02:43.433728 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5q54w_openshift-ovn-kubernetes(135b9f51-26ac-44c4-a817-cbfa4b36ae54)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.450048 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6908a5e2-782f-4cc6-af34-a62a546a9dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8aebd6e0a3c524d6d950232c7b12bb477ac32d9334863f94e6b3fa094689076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a64db1e71a93568b0df4fd7ba25703120bf42dad2dac4a67a4c59481fea8e45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7d43d04a0e8db178dabe494cea96d5bce395b0323225f7aefeab20beb8376d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.462793 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4132d2592d5c12a6e2fe340b9a6bc7fb6104bcd378bb48e44da753c0ed1be380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.478599 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.492913 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.506555 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.506615 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.506630 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.506646 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.506659 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:43Z","lastTransitionTime":"2026-01-28T15:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.507987 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.522733 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c04d370bb1ae00208a3c91c24a8d3bf60236155408ef3b6b3224e20ffdd5d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:02:30Z\\\",\\\"message\\\":\\\"2026-01-28T15:01:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2d0ac4ac-2bdb-4cf2-a16a-53924abd5864\\\\n2026-01-28T15:01:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2d0ac4ac-2bdb-4cf2-a16a-53924abd5864 to /host/opt/cni/bin/\\\\n2026-01-28T15:01:45Z [verbose] multus-daemon started\\\\n2026-01-28T15:01:45Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:02:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.535563 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.547265 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.558941 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.579591 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.603608 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f2e87210cb1a34cf493a3e3bc1dcfd791f4e4e524fcaf96bf8482bd5d3dd3b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:02:12Z\\\",\\\"message\\\":\\\"05 6560 factory.go:656] Stopping watch factory\\\\nI0128 15:02:12.779031 6560 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:02:12.779117 6560 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779169 6560 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779212 6560 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779249 6560 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.779547 6560 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:02:12.783327 6560 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0128 15:02:12.783354 6560 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0128 15:02:12.783409 6560 ovnkube.go:599] Stopped ovnkube\\\\nI0128 15:02:12.783440 6560 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 15:02:12.783536 6560 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:02:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:02:43Z\\\",\\\"message\\\":\\\"achine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 
fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0128 15:02:43.114554 6955 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z]\\\\nI0128 15:02:43.1145\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.609903 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.610035 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.610052 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.610080 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.610146 4893 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:43Z","lastTransitionTime":"2026-01-28T15:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.617854 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dqjfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27c2667f-3b81-4103-b924-fd2ec1678757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dqjfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.635362 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.652255 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.666027 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.680089 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"673baa26-aa9b-4740-b00a-27d20d947fc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0855aa9e95ef9375ded8247008dd1b259573f1b89b2aacb460cb2de280d4289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fc6009434717a8377a5cc6a836235c61de79ea9f85581a22d8c2229dde2c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9hnxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z" Jan 28 
15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.696400 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.712603 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.712710 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.712722 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.712775 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.712789 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:43Z","lastTransitionTime":"2026-01-28T15:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.815691 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.815736 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.815747 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.815763 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.815773 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:43Z","lastTransitionTime":"2026-01-28T15:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.883518 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 00:26:06.005329526 +0000 UTC Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.891646 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:02:43 crc kubenswrapper[4893]: E0128 15:02:43.891797 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.917825 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.917867 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.917877 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.917895 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:43 crc kubenswrapper[4893]: I0128 15:02:43.917905 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:43Z","lastTransitionTime":"2026-01-28T15:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.020511 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.020590 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.020607 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.020628 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.020639 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:44Z","lastTransitionTime":"2026-01-28T15:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.123650 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.123708 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.123719 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.123739 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.123751 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:44Z","lastTransitionTime":"2026-01-28T15:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.200151 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.200208 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.200219 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.200235 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.200247 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:44Z","lastTransitionTime":"2026-01-28T15:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:44 crc kubenswrapper[4893]: E0128 15:02:44.212397 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.215952 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.215995 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.216008 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.216026 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.216037 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:44Z","lastTransitionTime":"2026-01-28T15:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:44 crc kubenswrapper[4893]: E0128 15:02:44.234656 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.239556 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.239596 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.239609 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.239626 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.239638 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:44Z","lastTransitionTime":"2026-01-28T15:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:44 crc kubenswrapper[4893]: E0128 15:02:44.256058 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.260990 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.261048 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.261060 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.261081 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.261095 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:44Z","lastTransitionTime":"2026-01-28T15:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:44 crc kubenswrapper[4893]: E0128 15:02:44.274513 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.279820 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.279891 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.279911 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.279936 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.279948 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:44Z","lastTransitionTime":"2026-01-28T15:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:44 crc kubenswrapper[4893]: E0128 15:02:44.291425 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:44 crc kubenswrapper[4893]: E0128 15:02:44.291563 4893 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.294125 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.294216 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.294236 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.294295 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.294315 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:44Z","lastTransitionTime":"2026-01-28T15:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.398141 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.398209 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.398232 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.398255 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.398272 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:44Z","lastTransitionTime":"2026-01-28T15:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.440509 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5q54w_135b9f51-26ac-44c4-a817-cbfa4b36ae54/ovnkube-controller/3.log" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.445694 4893 scope.go:117] "RemoveContainer" containerID="2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0" Jan 28 15:02:44 crc kubenswrapper[4893]: E0128 15:02:44.445850 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5q54w_openshift-ovn-kubernetes(135b9f51-26ac-44c4-a817-cbfa4b36ae54)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.466390 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.480861 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.494124 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"673baa26-aa9b-4740-b00a-27d20d947fc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0855aa9e95ef9375ded8247008dd1b259573f1b89b2aacb460cb2de280d4289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fc6009434717a8377a5cc6a836235c61de79ea9f85581a22d8c2229dde2c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9hnxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:44Z is after 2025-08-24T17:21:41Z" Jan 28 
15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.500883 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.500938 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.500955 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.500981 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.501000 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:44Z","lastTransitionTime":"2026-01-28T15:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.511028 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63
a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.531073 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crco
nt/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4132d2592d5c12a6e2fe340b9a6bc7fb6104bcd378bb48e44da753c0ed1be380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.547730 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.564961 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.582496 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.598780 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c04d370bb1ae00208a3c91c24a8d3bf60236155408ef3b6b3224e20ffdd5d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:02:30Z\\\",\\\"message\\\":\\\"2026-01-28T15:01:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2d0ac4ac-2bdb-4cf2-a16a-53924abd5864\\\\n2026-01-28T15:01:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2d0ac4ac-2bdb-4cf2-a16a-53924abd5864 to /host/opt/cni/bin/\\\\n2026-01-28T15:01:45Z [verbose] multus-daemon started\\\\n2026-01-28T15:01:45Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:02:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.604826 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.604868 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.604881 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.604903 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.604920 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:44Z","lastTransitionTime":"2026-01-28T15:02:44Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.622050 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.639102 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6908a5e2-782f-4cc6-af34-a62a546a9dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8aebd6e0a3c524d6d950232c7b12bb477ac32d9334863f94e6b3fa094689076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a64db1e71a93568b0df4fd7ba25703120bf42dad2dac4a67a4c59481fea8e45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7d43d04a0e8db178dabe494cea96d5bce395b0323225f7aefeab20beb8376d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.654387 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.671081 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://
c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.704562 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:02:43Z\\\",\\\"message\\\":\\\"achine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0128 15:02:43.114554 6955 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z]\\\\nI0128 
15:02:43.1145\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:02:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5q54w_openshift-ovn-kubernetes(135b9f51-26ac-44c4-a817-cbfa4b36ae54)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.707249 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.707296 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.707315 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.707339 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.707356 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:44Z","lastTransitionTime":"2026-01-28T15:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.721297 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dqjfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27c2667f-3b81-4103-b924-fd2ec1678757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dqjfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.735549 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.756110 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.810522 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.810576 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.810593 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.810615 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.810633 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:44Z","lastTransitionTime":"2026-01-28T15:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.883810 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 19:03:08.286755226 +0000 UTC Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.891312 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.891311 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:02:44 crc kubenswrapper[4893]: E0128 15:02:44.891507 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.891568 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:02:44 crc kubenswrapper[4893]: E0128 15:02:44.891718 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:02:44 crc kubenswrapper[4893]: E0128 15:02:44.891799 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.912673 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.912733 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.912746 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.912766 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:44 crc kubenswrapper[4893]: I0128 15:02:44.912776 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:44Z","lastTransitionTime":"2026-01-28T15:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.016407 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.016455 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.016467 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.016510 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.016521 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:45Z","lastTransitionTime":"2026-01-28T15:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.119234 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.119300 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.119320 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.119349 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.119370 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:45Z","lastTransitionTime":"2026-01-28T15:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.222719 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.222779 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.222799 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.222826 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.222847 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:45Z","lastTransitionTime":"2026-01-28T15:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.325491 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.325552 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.325562 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.325580 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.325591 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:45Z","lastTransitionTime":"2026-01-28T15:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.428873 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.428970 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.428987 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.429013 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.429028 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:45Z","lastTransitionTime":"2026-01-28T15:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.531645 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.531710 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.531720 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.531737 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.531747 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:45Z","lastTransitionTime":"2026-01-28T15:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.634499 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.634552 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.634564 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.634586 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.634611 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:45Z","lastTransitionTime":"2026-01-28T15:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.737120 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.737177 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.737186 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.737204 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.737216 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:45Z","lastTransitionTime":"2026-01-28T15:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.840275 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.840325 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.840338 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.840356 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.840388 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:45Z","lastTransitionTime":"2026-01-28T15:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.884335 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 12:57:15.735444285 +0000 UTC Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.891487 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:02:45 crc kubenswrapper[4893]: E0128 15:02:45.891678 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.943341 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.943400 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.943414 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.943432 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:45 crc kubenswrapper[4893]: I0128 15:02:45.943444 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:45Z","lastTransitionTime":"2026-01-28T15:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.046902 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.046964 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.046977 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.046994 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.047005 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:46Z","lastTransitionTime":"2026-01-28T15:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.149566 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.149603 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.149614 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.149628 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.149638 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:46Z","lastTransitionTime":"2026-01-28T15:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.252281 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.252341 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.252352 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.252368 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.252378 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:46Z","lastTransitionTime":"2026-01-28T15:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.355109 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.355166 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.355183 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.355211 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.355228 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:46Z","lastTransitionTime":"2026-01-28T15:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.457984 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.458028 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.458038 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.458081 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.458090 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:46Z","lastTransitionTime":"2026-01-28T15:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.560293 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.560335 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.560348 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.560368 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.560379 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:46Z","lastTransitionTime":"2026-01-28T15:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.662928 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.662981 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.662994 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.663011 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.663025 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:46Z","lastTransitionTime":"2026-01-28T15:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.765368 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.765401 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.765409 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.765422 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.765432 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:46Z","lastTransitionTime":"2026-01-28T15:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.768925 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.768971 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.768992 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.769022 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:02:46 crc kubenswrapper[4893]: E0128 15:02:46.769130 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 28 15:02:46 crc kubenswrapper[4893]: E0128 15:02:46.769130 4893 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 28 15:02:46 crc kubenswrapper[4893]: E0128 15:02:46.769143 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 28 15:02:46 crc kubenswrapper[4893]: E0128 15:02:46.769141 4893 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 28 15:02:46 crc kubenswrapper[4893]: E0128 15:02:46.769155 4893 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 15:02:46 crc kubenswrapper[4893]: E0128 15:02:46.769258 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 28 15:02:46 crc kubenswrapper[4893]: E0128 15:02:46.769292 4893 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 28 15:02:46 crc kubenswrapper[4893]: E0128 15:02:46.769304 4893 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 15:02:46 crc kubenswrapper[4893]: E0128 15:02:46.769180 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:50.769166447 +0000 UTC m=+148.542781475 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 28 15:02:46 crc kubenswrapper[4893]: E0128 15:02:46.769387 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:50.769366392 +0000 UTC m=+148.542981420 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 28 15:02:46 crc kubenswrapper[4893]: E0128 15:02:46.769400 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:50.769393503 +0000 UTC m=+148.543008531 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 15:02:46 crc kubenswrapper[4893]: E0128 15:02:46.769411 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:50.769405913 +0000 UTC m=+148.543020931 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.868024 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.868055 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.868067 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.868087 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.868101 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:46Z","lastTransitionTime":"2026-01-28T15:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.869631 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:02:46 crc kubenswrapper[4893]: E0128 15:02:46.869785 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:50.869758832 +0000 UTC m=+148.643373860 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.884952 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 16:27:26.182055079 +0000 UTC
Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.891445 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.891588 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:02:46 crc kubenswrapper[4893]: E0128 15:02:46.891647 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.891705 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:02:46 crc kubenswrapper[4893]: E0128 15:02:46.891792 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:02:46 crc kubenswrapper[4893]: E0128 15:02:46.891897 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.970599 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.970971 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.970985 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.971000 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:46 crc kubenswrapper[4893]: I0128 15:02:46.971012 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:46Z","lastTransitionTime":"2026-01-28T15:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.073567 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.073606 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.073620 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.073638 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.073650 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:47Z","lastTransitionTime":"2026-01-28T15:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.177791 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.177867 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.177889 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.177921 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.177942 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:47Z","lastTransitionTime":"2026-01-28T15:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.281596 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.281711 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.281734 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.281766 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.281787 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:47Z","lastTransitionTime":"2026-01-28T15:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.385292 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.385379 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.385402 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.385441 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.385532 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:47Z","lastTransitionTime":"2026-01-28T15:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.489045 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.489100 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.489116 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.489136 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.489149 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:47Z","lastTransitionTime":"2026-01-28T15:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.592354 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.592433 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.592446 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.592463 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.592536 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:47Z","lastTransitionTime":"2026-01-28T15:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.695361 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.695418 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.695435 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.695459 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.695501 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:47Z","lastTransitionTime":"2026-01-28T15:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.798277 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.798324 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.798337 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.798353 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.798362 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:47Z","lastTransitionTime":"2026-01-28T15:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.885800 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 17:14:45.781716829 +0000 UTC
Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.891087 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn"
Jan 28 15:02:47 crc kubenswrapper[4893]: E0128 15:02:47.891241 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757"
Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.901583 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.901647 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.901668 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.901704 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:47 crc kubenswrapper[4893]: I0128 15:02:47.901726 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:47Z","lastTransitionTime":"2026-01-28T15:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.004211 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.004274 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.004290 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.004310 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.004326 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:48Z","lastTransitionTime":"2026-01-28T15:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.108144 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.108196 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.108208 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.108228 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.108241 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:48Z","lastTransitionTime":"2026-01-28T15:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.212151 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.212252 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.212311 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.212339 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.212395 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:48Z","lastTransitionTime":"2026-01-28T15:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.316622 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.316677 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.316690 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.316708 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.316719 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:48Z","lastTransitionTime":"2026-01-28T15:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.420558 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.420613 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.420626 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.420651 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.420663 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:48Z","lastTransitionTime":"2026-01-28T15:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.523649 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.523926 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.523995 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.524071 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.524139 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:48Z","lastTransitionTime":"2026-01-28T15:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.626904 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.626955 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.626967 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.626987 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.627001 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:48Z","lastTransitionTime":"2026-01-28T15:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.729833 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.729869 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.729882 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.729902 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.729916 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:48Z","lastTransitionTime":"2026-01-28T15:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.832999 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.833055 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.833068 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.833088 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.833101 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:48Z","lastTransitionTime":"2026-01-28T15:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.886956 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 12:06:43.603399016 +0000 UTC
Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.891360 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.891393 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.891454 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:02:48 crc kubenswrapper[4893]: E0128 15:02:48.891618 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:02:48 crc kubenswrapper[4893]: E0128 15:02:48.891747 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:02:48 crc kubenswrapper[4893]: E0128 15:02:48.891799 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.936044 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.936092 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.936105 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.936123 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:48 crc kubenswrapper[4893]: I0128 15:02:48.936135 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:48Z","lastTransitionTime":"2026-01-28T15:02:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.038526 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.038571 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.038586 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.038603 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.038616 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:49Z","lastTransitionTime":"2026-01-28T15:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.141543 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.141618 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.141646 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.141672 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.141689 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:49Z","lastTransitionTime":"2026-01-28T15:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.244752 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.244814 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.244827 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.244851 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.244864 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:49Z","lastTransitionTime":"2026-01-28T15:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.348756 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.348861 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.348889 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.348927 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.348954 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:49Z","lastTransitionTime":"2026-01-28T15:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.451551 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.451662 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.451690 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.451728 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.451753 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:49Z","lastTransitionTime":"2026-01-28T15:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.554339 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.554422 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.554443 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.554502 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.554524 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:49Z","lastTransitionTime":"2026-01-28T15:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.658643 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.658742 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.658763 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.658828 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.658847 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:49Z","lastTransitionTime":"2026-01-28T15:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.762220 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.762293 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.762308 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.762330 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.762344 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:49Z","lastTransitionTime":"2026-01-28T15:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.865389 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.865445 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.865459 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.865495 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.865508 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:49Z","lastTransitionTime":"2026-01-28T15:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.887915 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 23:29:48.288035989 +0000 UTC
Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.891348 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn"
Jan 28 15:02:49 crc kubenswrapper[4893]: E0128 15:02:49.891613 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757"
Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.968274 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.968354 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.968367 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.968383 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:49 crc kubenswrapper[4893]: I0128 15:02:49.968395 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:49Z","lastTransitionTime":"2026-01-28T15:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.072326 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.072377 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.072390 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.072408 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.072420 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:50Z","lastTransitionTime":"2026-01-28T15:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.176056 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.176249 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.176276 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.176312 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.176336 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:50Z","lastTransitionTime":"2026-01-28T15:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.279885 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.279948 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.279972 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.280005 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.280033 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:50Z","lastTransitionTime":"2026-01-28T15:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.383334 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.383410 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.383434 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.383465 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.383518 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:50Z","lastTransitionTime":"2026-01-28T15:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.486741 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.486801 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.486811 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.486828 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.486839 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:50Z","lastTransitionTime":"2026-01-28T15:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.590164 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.590228 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.590244 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.590270 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.590286 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:50Z","lastTransitionTime":"2026-01-28T15:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.693783 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.693840 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.693853 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.693872 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.693884 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:50Z","lastTransitionTime":"2026-01-28T15:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.798024 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.798080 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.798091 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.798112 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.798124 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:50Z","lastTransitionTime":"2026-01-28T15:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.889361 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 05:28:51.196674272 +0000 UTC Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.891370 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:02:50 crc kubenswrapper[4893]: E0128 15:02:50.891627 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.891726 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.891735 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:02:50 crc kubenswrapper[4893]: E0128 15:02:50.892183 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:02:50 crc kubenswrapper[4893]: E0128 15:02:50.892357 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.900507 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.900587 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.900607 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.900631 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:50 crc kubenswrapper[4893]: I0128 15:02:50.900651 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:50Z","lastTransitionTime":"2026-01-28T15:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.004173 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.004248 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.004270 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.004300 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.004319 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:51Z","lastTransitionTime":"2026-01-28T15:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.108220 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.108305 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.108329 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.108361 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.108385 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:51Z","lastTransitionTime":"2026-01-28T15:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.212013 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.212077 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.212096 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.212122 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.212141 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:51Z","lastTransitionTime":"2026-01-28T15:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.315657 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.315730 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.315749 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.315780 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.315801 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:51Z","lastTransitionTime":"2026-01-28T15:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.419831 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.419893 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.419911 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.419938 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.419956 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:51Z","lastTransitionTime":"2026-01-28T15:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.523120 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.523184 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.523203 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.523233 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.523252 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:51Z","lastTransitionTime":"2026-01-28T15:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.627732 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.627814 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.627838 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.627879 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.627921 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:51Z","lastTransitionTime":"2026-01-28T15:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.731865 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.731942 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.731962 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.731998 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.732019 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:51Z","lastTransitionTime":"2026-01-28T15:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.835876 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.835919 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.835929 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.835968 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.835978 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:51Z","lastTransitionTime":"2026-01-28T15:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.890135 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 16:41:08.688257899 +0000 UTC Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.891787 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:02:51 crc kubenswrapper[4893]: E0128 15:02:51.892046 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.939355 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.939409 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.939420 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.939440 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:51 crc kubenswrapper[4893]: I0128 15:02:51.939468 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:51Z","lastTransitionTime":"2026-01-28T15:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.042306 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.042409 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.042442 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.042539 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.042585 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:52Z","lastTransitionTime":"2026-01-28T15:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.146158 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.146236 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.146256 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.146285 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.146304 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:52Z","lastTransitionTime":"2026-01-28T15:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.249606 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.249674 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.249691 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.249718 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.249736 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:52Z","lastTransitionTime":"2026-01-28T15:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.353586 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.353704 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.353726 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.353754 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.353805 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:52Z","lastTransitionTime":"2026-01-28T15:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.458139 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.458238 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.458266 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.458302 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.458325 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:52Z","lastTransitionTime":"2026-01-28T15:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.563078 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.563450 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.563630 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.563748 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.563862 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:52Z","lastTransitionTime":"2026-01-28T15:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.668030 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.668179 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.668207 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.668297 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.668323 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:52Z","lastTransitionTime":"2026-01-28T15:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.771831 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.771913 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.771927 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.771949 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.771960 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:52Z","lastTransitionTime":"2026-01-28T15:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.875356 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.875426 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.875445 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.875488 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.875504 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:52Z","lastTransitionTime":"2026-01-28T15:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.891184 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.891263 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.891283 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.891178 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 15:17:00.828135019 +0000 UTC Jan 28 15:02:52 crc kubenswrapper[4893]: E0128 15:02:52.891318 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:02:52 crc kubenswrapper[4893]: E0128 15:02:52.891394 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:02:52 crc kubenswrapper[4893]: E0128 15:02:52.891470 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.914008 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.931564 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c04d370bb1ae00208a3c91c24a8d3bf60236155408ef3b6b3224e20ffdd5d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:02:30Z\\\",\\\"message\\\":\\\"2026-01-28T15:01:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2d0ac4ac-2bdb-4cf2-a16a-53924abd5864\\\\n2026-01-28T15:01:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2d0ac4ac-2bdb-4cf2-a16a-53924abd5864 to /host/opt/cni/bin/\\\\n2026-01-28T15:01:45Z [verbose] multus-daemon started\\\\n2026-01-28T15:01:45Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:02:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.949895 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.964190 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6908a5e2-782f-4cc6-af34-a62a546a9dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8aebd6e0a3c524d6d950232c7b12bb477ac32d9334863f94e6b3fa094689076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a64db1e71a93568b0df4fd7ba25703120bf42dad2dac4a67a4c59481fea8e45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7d43d04a0e8db178dabe494cea96d5bce395b0323225f7aefeab20beb8376d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.979517 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.979982 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.980182 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.980336 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 
15:02:52.980533 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:52Z","lastTransitionTime":"2026-01-28T15:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:52 crc kubenswrapper[4893]: I0128 15:02:52.982437 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4132d2592d5c12a6e2fe340b9a6bc7fb6104bcd378bb48e44da753c0ed1be380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.001136 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.019666 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.035985 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dqjfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27c2667f-3b81-4103-b924-fd2ec1678757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dqjfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.054385 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.074273 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.086244 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.086299 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.086320 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.086340 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.086353 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:53Z","lastTransitionTime":"2026-01-28T15:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.092031 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:53 crc 
kubenswrapper[4893]: I0128 15:02:53.115376 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"
cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.140408 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:02:43Z\\\",\\\"message\\\":\\\"achine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0128 15:02:43.114554 6955 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z]\\\\nI0128 15:02:43.1145\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:02:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5q54w_openshift-ovn-kubernetes(135b9f51-26ac-44c4-a817-cbfa4b36ae54)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.164445 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.179917 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.189647 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.189686 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.189700 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:53 crc 
kubenswrapper[4893]: I0128 15:02:53.189718 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.189729 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:53Z","lastTransitionTime":"2026-01-28T15:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.202188 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"673baa26-aa9b-4740-b00a-27d20d947fc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0855aa9e95ef9375ded8247008dd1b259573f1b89b2aacb460cb2de280d4289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fc6009434717a8377a5cc6a836235c61de79ea9f85581a22d8c2229dde2c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env
-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9hnxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.222841 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.292597 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.292688 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.292713 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.292744 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.292764 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:53Z","lastTransitionTime":"2026-01-28T15:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.395713 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.395991 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.396081 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.396170 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.396289 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:53Z","lastTransitionTime":"2026-01-28T15:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.498707 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.498806 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.498827 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.498857 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.498881 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:53Z","lastTransitionTime":"2026-01-28T15:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.601201 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.601252 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.601265 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.601283 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.601294 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:53Z","lastTransitionTime":"2026-01-28T15:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.703979 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.704025 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.704037 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.704057 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.704072 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:53Z","lastTransitionTime":"2026-01-28T15:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.807761 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.807815 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.807828 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.807850 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.807863 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:53Z","lastTransitionTime":"2026-01-28T15:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.891390 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 14:20:46.999390292 +0000 UTC Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.891507 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:02:53 crc kubenswrapper[4893]: E0128 15:02:53.891909 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.905977 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.909584 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.909631 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.909645 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.909664 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:53 crc kubenswrapper[4893]: I0128 15:02:53.909677 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:53Z","lastTransitionTime":"2026-01-28T15:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.012617 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.012694 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.012714 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.012748 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.012768 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:54Z","lastTransitionTime":"2026-01-28T15:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.115273 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.115322 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.115341 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.115366 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.115382 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:54Z","lastTransitionTime":"2026-01-28T15:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.218871 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.218951 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.218965 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.219035 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.219048 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:54Z","lastTransitionTime":"2026-01-28T15:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.322167 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.322226 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.322238 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.322258 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.322272 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:54Z","lastTransitionTime":"2026-01-28T15:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.425851 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.425917 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.425933 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.425957 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.425968 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:54Z","lastTransitionTime":"2026-01-28T15:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.484668 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.484740 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.484758 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.484784 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.484803 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:54Z","lastTransitionTime":"2026-01-28T15:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:54 crc kubenswrapper[4893]: E0128 15:02:54.499279 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.502955 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.502991 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.503003 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.503021 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.503034 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:54Z","lastTransitionTime":"2026-01-28T15:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:54 crc kubenswrapper[4893]: E0128 15:02:54.514205 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.518162 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.518190 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.518201 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.518215 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.518225 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:54Z","lastTransitionTime":"2026-01-28T15:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:54 crc kubenswrapper[4893]: E0128 15:02:54.530530 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.533943 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.533979 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.533988 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.534004 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.534017 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:54Z","lastTransitionTime":"2026-01-28T15:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:54 crc kubenswrapper[4893]: E0128 15:02:54.550947 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.554823 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.554862 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
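The patch above is rejected not because of its payload but because the node.network-node-identity.openshift.io webhook presents a serving certificate that expired on 2025-08-24, while the node clock reads 2026-01-28. A quick way to confirm what the endpoint at https://127.0.0.1:9743 is actually serving is to complete a handshake without verification and print the peer certificate's validity window; a minimal Go sketch (the address is taken from the log; this is a diagnostic aid, not an OpenShift tool):

```go
// checkcert.go - dial a TLS endpoint and print the validity window of the
// certificate it presents, to confirm the expired webhook cert reported
// in the kubelet log.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Address taken from the failing webhook call in the log.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // we want to inspect the cert, not trust it
	})
	if err != nil {
		log.Fatalf("handshake failed: %v", err)
	}
	defer conn.Close()

	now := time.Now()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s\n  notBefore=%s\n  notAfter=%s\n  expired=%v\n",
			cert.Subject, cert.NotBefore, cert.NotAfter, now.After(cert.NotAfter))
	}
}
```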
event="NodeHasNoDiskPressure" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.554871 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.554888 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.554898 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:54Z","lastTransitionTime":"2026-01-28T15:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:54 crc kubenswrapper[4893]: E0128 15:02:54.569670 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:02:54 crc kubenswrapper[4893]: E0128 15:02:54.569859 4893 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.571594 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
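After the webhook rejects the patch a second time, the kubelet gives up with "update node status exceeds retry count". In the upstream kubelet this loop is bounded by a small constant (nodeStatusUpdateRetry, 5 in the versions I have read; treat the exact value as an assumption about this build). The shape of the loop behind the "will retry" / "exceeds retry count" pair is roughly:

```go
// retryloop.go - sketch of the kubelet's bounded node-status update loop,
// not the actual source; nodeStatusUpdateRetry = 5 is an assumption based
// on upstream kubelet code.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // assumed bound, per upstream kubelet

// updateNodeStatus retries a status patch a fixed number of times; each
// failure corresponds to an "Error updating node status, will retry" record.
func updateNodeStatus(try func() error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := try(); err != nil {
			continue // logged, then retried
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	// Simulate the expired-webhook failure: every attempt is rejected.
	err := updateNodeStatus(func() error {
		return errors.New("failed calling webhook: certificate has expired")
	})
	fmt.Println(err)
}
```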
event="NodeHasSufficientMemory" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.571632 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.571645 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.571665 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.571680 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:54Z","lastTransitionTime":"2026-01-28T15:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.673852 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.673901 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.673913 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.673935 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.673949 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:54Z","lastTransitionTime":"2026-01-28T15:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.776445 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.776495 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.776507 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.776522 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.776531 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:54Z","lastTransitionTime":"2026-01-28T15:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.879157 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.879211 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.879228 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.879248 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.879263 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:54Z","lastTransitionTime":"2026-01-28T15:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.890872 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.890930 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.890872 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:02:54 crc kubenswrapper[4893]: E0128 15:02:54.891037 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:02:54 crc kubenswrapper[4893]: E0128 15:02:54.891180 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:02:54 crc kubenswrapper[4893]: E0128 15:02:54.891303 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.891803 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 19:00:54.914047323 +0000 UTC Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.982096 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.982145 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.982156 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.982175 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:54 crc kubenswrapper[4893]: I0128 15:02:54.982188 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:54Z","lastTransitionTime":"2026-01-28T15:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.084347 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.084414 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.084437 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.084467 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.084531 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:55Z","lastTransitionTime":"2026-01-28T15:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.187015 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.187149 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.187163 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.187179 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.187191 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:55Z","lastTransitionTime":"2026-01-28T15:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.290322 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.290379 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.290388 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.290406 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.290415 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:55Z","lastTransitionTime":"2026-01-28T15:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.392777 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.392844 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.392864 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.392892 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.392910 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:55Z","lastTransitionTime":"2026-01-28T15:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.494964 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.495015 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.495027 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.495046 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.495058 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:55Z","lastTransitionTime":"2026-01-28T15:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.598397 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.598534 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.598562 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.598591 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.598609 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:55Z","lastTransitionTime":"2026-01-28T15:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.700848 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.700923 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.700938 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.700955 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.700966 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:55Z","lastTransitionTime":"2026-01-28T15:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.803837 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.803877 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.803888 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.803903 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.803913 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:55Z","lastTransitionTime":"2026-01-28T15:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.891207 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:02:55 crc kubenswrapper[4893]: E0128 15:02:55.891738 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.891999 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 15:27:34.077268902 +0000 UTC Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.904360 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.906444 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.906501 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.906514 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.906531 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:55 crc kubenswrapper[4893]: I0128 15:02:55.906549 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:55Z","lastTransitionTime":"2026-01-28T15:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.009178 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.009272 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.009294 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.009326 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.009346 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:56Z","lastTransitionTime":"2026-01-28T15:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.113150 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.113202 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.113215 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.113240 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.113263 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:56Z","lastTransitionTime":"2026-01-28T15:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.216103 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.216146 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.216156 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.216173 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.216184 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:56Z","lastTransitionTime":"2026-01-28T15:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.318741 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.318773 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.318782 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.318797 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.318808 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:56Z","lastTransitionTime":"2026-01-28T15:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.421687 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.421724 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.421734 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.421749 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.421759 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:56Z","lastTransitionTime":"2026-01-28T15:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.524881 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.524935 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.524945 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.524962 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.524971 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:56Z","lastTransitionTime":"2026-01-28T15:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.628870 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.628907 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.628916 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.628932 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.628942 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:56Z","lastTransitionTime":"2026-01-28T15:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.731114 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.731149 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.731159 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.731172 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.731182 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:56Z","lastTransitionTime":"2026-01-28T15:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.833274 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.833335 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.833347 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.833381 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.833395 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:56Z","lastTransitionTime":"2026-01-28T15:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.891813 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:02:56 crc kubenswrapper[4893]: E0128 15:02:56.892004 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.892076 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:02:56 crc kubenswrapper[4893]: E0128 15:02:56.892174 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.892219 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 17:55:55.443672704 +0000 UTC
Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.892303 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:02:56 crc kubenswrapper[4893]: E0128 15:02:56.892364 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.937150 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.937219 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.937237 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.937264 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:02:56 crc kubenswrapper[4893]: I0128 15:02:56.937283 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:56Z","lastTransitionTime":"2026-01-28T15:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the same five-entry status block (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") repeats at roughly 100 ms intervals; further blocks in this span start at 15:02:57.039, .144, .247, .351, .455, .558, .663, .767 and .871 ...]
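The kubelet entries above all carry klog's single-line header (severity letter plus date, timestamp, PID, source file:line, then the message), which makes the repetition easy to tally mechanically. A minimal Python sketch, assuming only the header format visible in this excerpt; the file name kubelet.log in the usage comment is hypothetical:

    import re
    from collections import Counter

    # klog header as it appears above, e.g.:
    # I0128 15:02:56.937150 4893 kubelet_node_status.go:724] "Recording event message for node" ...
    KLOG = re.compile(
        r'(?P<sev>[IWEF])(?P<date>\d{4}) (?P<time>\d{2}:\d{2}:\d{2}\.\d+) '
        r'(?P<pid>\d+) (?P<src>[\w.]+:\d+)\] (?P<msg>.*)')

    def tally(lines):
        """Count entries per source location, e.g. kubelet_node_status.go:724."""
        counts = Counter()
        for line in lines:
            m = KLOG.search(line)
            if m:
                counts[m.group('src')] += 1
        return counts

    # Hypothetical usage on a saved journal excerpt:
    # with open('kubelet.log') as f:
    #     for src, n in tally(f).most_common(5):
    #         print(n, src)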
Jan 28 15:02:57 crc kubenswrapper[4893]: I0128 15:02:57.891795 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn"
Jan 28 15:02:57 crc kubenswrapper[4893]: E0128 15:02:57.892047 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757"
Jan 28 15:02:57 crc kubenswrapper[4893]: I0128 15:02:57.892424 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 16:29:49.702682225 +0000 UTC
[... the same status block repeats, starting at 15:02:57.974, 15:02:58.078, .182, .287, .390, .495, .599, .702 and .806 ...]
Jan 28 15:02:58 crc kubenswrapper[4893]: I0128 15:02:58.891713 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:02:58 crc kubenswrapper[4893]: I0128 15:02:58.891731 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:02:58 crc kubenswrapper[4893]: I0128 15:02:58.891947 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:02:58 crc kubenswrapper[4893]: E0128 15:02:58.892002 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:02:58 crc kubenswrapper[4893]: E0128 15:02:58.892140 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:02:58 crc kubenswrapper[4893]: E0128 15:02:58.892385 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
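Note that every certificate_manager.go:356 entry reports the same expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline: the kubelet's certificate manager re-draws a jittered deadline inside the certificate's lifetime on each pass. A rough illustration of that arithmetic; the 70-90% window and the one-year lifetime are assumptions about upstream defaults, not stated anywhere in this log:

    import random
    from datetime import datetime, timedelta

    def rotation_deadline(not_before, not_after, rng=random.random):
        """Pick a deadline at a random point 70-90% of the way through the
        certificate's validity (assumed upstream behaviour, not from this log)."""
        total = not_after - not_before
        return not_before + timedelta(seconds=total.total_seconds() * (0.7 + 0.2 * rng()))

    # The expiry printed by certificate_manager.go above:
    expires = datetime(2026, 2, 24, 5, 53, 3)
    issued = expires - timedelta(days=365)        # assumed one-year lifetime
    print(rotation_deadline(issued, expires))     # a different deadline on each run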
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:02:58 crc kubenswrapper[4893]: I0128 15:02:58.892538 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 14:34:41.69182315 +0000 UTC Jan 28 15:02:58 crc kubenswrapper[4893]: I0128 15:02:58.909315 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:58 crc kubenswrapper[4893]: I0128 15:02:58.909404 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:58 crc kubenswrapper[4893]: I0128 15:02:58.909435 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:58 crc kubenswrapper[4893]: I0128 15:02:58.909469 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:58 crc kubenswrapper[4893]: I0128 15:02:58.909545 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:58Z","lastTransitionTime":"2026-01-28T15:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.014031 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.014109 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.014130 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.014161 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.014183 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:59Z","lastTransitionTime":"2026-01-28T15:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.118396 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.118466 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.118519 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.118550 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.118572 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:59Z","lastTransitionTime":"2026-01-28T15:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.225288 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.225417 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.225445 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.225521 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.225566 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:59Z","lastTransitionTime":"2026-01-28T15:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.333790 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.333863 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.333881 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.333910 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.333931 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:59Z","lastTransitionTime":"2026-01-28T15:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.437760 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.437832 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.437854 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.437883 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.437903 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:59Z","lastTransitionTime":"2026-01-28T15:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.541005 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.541079 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.541101 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.541134 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.541157 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:59Z","lastTransitionTime":"2026-01-28T15:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.644326 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.644406 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.644426 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.644456 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.644504 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:59Z","lastTransitionTime":"2026-01-28T15:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.748843 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.748940 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.748968 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.749003 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.749028 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:59Z","lastTransitionTime":"2026-01-28T15:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.852725 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.852777 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.852792 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.852813 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.852827 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:59Z","lastTransitionTime":"2026-01-28T15:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.891738 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:02:59 crc kubenswrapper[4893]: E0128 15:02:59.892139 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.892400 4893 scope.go:117] "RemoveContainer" containerID="2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0" Jan 28 15:02:59 crc kubenswrapper[4893]: E0128 15:02:59.892589 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5q54w_openshift-ovn-kubernetes(135b9f51-26ac-44c4-a817-cbfa4b36ae54)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.892701 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 03:31:50.669272856 +0000 UTC Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.956781 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.956865 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.956889 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.956947 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:02:59 crc kubenswrapper[4893]: I0128 15:02:59.956966 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:02:59Z","lastTransitionTime":"2026-01-28T15:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.062239 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.062325 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.062345 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.062376 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.062403 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:00Z","lastTransitionTime":"2026-01-28T15:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.166605 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.166703 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.166724 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.166757 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.166781 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:00Z","lastTransitionTime":"2026-01-28T15:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.270984 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.271059 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.271079 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.271109 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.271131 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:00Z","lastTransitionTime":"2026-01-28T15:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.375569 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.375633 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.375650 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.375677 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.375695 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:00Z","lastTransitionTime":"2026-01-28T15:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.479768 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.479850 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.479875 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.479913 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.479935 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:00Z","lastTransitionTime":"2026-01-28T15:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.582838 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.582910 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.582937 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.582971 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.582995 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:00Z","lastTransitionTime":"2026-01-28T15:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.650280 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/27c2667f-3b81-4103-b924-fd2ec1678757-metrics-certs\") pod \"network-metrics-daemon-dqjfn\" (UID: \"27c2667f-3b81-4103-b924-fd2ec1678757\") " pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:03:00 crc kubenswrapper[4893]: E0128 15:03:00.650651 4893 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:03:00 crc kubenswrapper[4893]: E0128 15:03:00.650846 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/27c2667f-3b81-4103-b924-fd2ec1678757-metrics-certs podName:27c2667f-3b81-4103-b924-fd2ec1678757 nodeName:}" failed. No retries permitted until 2026-01-28 15:04:04.650807518 +0000 UTC m=+162.424422586 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/27c2667f-3b81-4103-b924-fd2ec1678757-metrics-certs") pod "network-metrics-daemon-dqjfn" (UID: "27c2667f-3b81-4103-b924-fd2ec1678757") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.686092 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.686163 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.686186 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.686216 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.686241 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:00Z","lastTransitionTime":"2026-01-28T15:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.789467 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.789547 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.789562 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.789582 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.789599 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:00Z","lastTransitionTime":"2026-01-28T15:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.890945 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.891098 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:00 crc kubenswrapper[4893]: E0128 15:03:00.891146 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
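Two capped exponential back-offs are visible just above: the ovnkube-controller restart is held for "back-off 40s" (CrashLoopBackOff), and the metrics-certs mount retry is deferred by "durationBeforeRetry 1m4s" (64 s). A small sketch of that arithmetic, assuming kubelet's usual constants (10 s doubling to a 5 min cap for container restarts, 500 ms doubling to a 2m2s cap for volume operations); these constants are assumed upstream defaults, not stated in the log:

    import itertools

    def backoff_steps(initial, cap, factor=2.0):
        """Yield the successive delays of a capped exponential back-off."""
        delay = initial
        while True:
            yield min(delay, cap)
            delay *= factor

    # Container restarts: 10, 20, 40, 80, 160, 300, 300, ... seconds.
    # "back-off 40s" would therefore correspond to a third failed restart.
    print(list(itertools.islice(backoff_steps(10, 300), 7)))

    # Volume mount retries: 0.5, 1, 2, ..., 64, 122, 122, ... seconds.
    # "durationBeforeRetry 1m4s" (64 s) matches the eighth attempt.
    print(list(itertools.islice(backoff_steps(0.5, 122), 10)))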
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.891206 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:00 crc kubenswrapper[4893]: E0128 15:03:00.891259 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:00 crc kubenswrapper[4893]: E0128 15:03:00.891389 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.893350 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 02:10:30.530832011 +0000 UTC Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.893416 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.893457 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.893506 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.893532 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.893552 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:00Z","lastTransitionTime":"2026-01-28T15:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.996660 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.996715 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.996730 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.996749 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:00 crc kubenswrapper[4893]: I0128 15:03:00.996761 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:00Z","lastTransitionTime":"2026-01-28T15:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.099945 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.100012 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.100025 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.100046 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.100062 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:01Z","lastTransitionTime":"2026-01-28T15:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.203848 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.203920 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.203932 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.203954 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.203970 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:01Z","lastTransitionTime":"2026-01-28T15:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.308418 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.308492 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.308506 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.308529 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.308543 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:01Z","lastTransitionTime":"2026-01-28T15:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.412790 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.412857 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.412879 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.412912 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.412934 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:01Z","lastTransitionTime":"2026-01-28T15:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.515034 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.515076 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.515090 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.515106 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.515122 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:01Z","lastTransitionTime":"2026-01-28T15:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.619090 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.619189 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.619209 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.619238 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.619258 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:01Z","lastTransitionTime":"2026-01-28T15:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.722426 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.722595 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.722626 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.722660 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.722684 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:01Z","lastTransitionTime":"2026-01-28T15:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.826195 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.826313 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.826337 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.826367 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.826386 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:01Z","lastTransitionTime":"2026-01-28T15:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.891512 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:03:01 crc kubenswrapper[4893]: E0128 15:03:01.891753 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.893495 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 00:59:32.008826388 +0000 UTC Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.929755 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.929819 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.929835 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.929859 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:01 crc kubenswrapper[4893]: I0128 15:03:01.929877 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:01Z","lastTransitionTime":"2026-01-28T15:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.033863 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.033910 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.033922 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.033940 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.033952 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:02Z","lastTransitionTime":"2026-01-28T15:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.137859 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.138263 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.138448 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.138642 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.138771 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:02Z","lastTransitionTime":"2026-01-28T15:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.242611 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.242680 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.242704 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.242733 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.242750 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:02Z","lastTransitionTime":"2026-01-28T15:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.346052 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.346172 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.346197 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.346227 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.346249 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:02Z","lastTransitionTime":"2026-01-28T15:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.449325 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.449382 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.449395 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.449413 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.449427 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:02Z","lastTransitionTime":"2026-01-28T15:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.553031 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.553134 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.553166 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.553198 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.553219 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:02Z","lastTransitionTime":"2026-01-28T15:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.657265 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.657355 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.657380 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.657413 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.657435 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:02Z","lastTransitionTime":"2026-01-28T15:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.760429 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.760532 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.760546 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.760569 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.760581 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:02Z","lastTransitionTime":"2026-01-28T15:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.863109 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.863194 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.863209 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.863237 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.863253 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:02Z","lastTransitionTime":"2026-01-28T15:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.891573 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.891831 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.891910 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:02 crc kubenswrapper[4893]: E0128 15:03:02.892089 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:02 crc kubenswrapper[4893]: E0128 15:03:02.892248 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:02 crc kubenswrapper[4893]: E0128 15:03:02.892359 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.893671 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 20:43:57.892893491 +0000 UTC Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.905670 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acd01816-5ce0-42cc-9f29-ee0b5038c292\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5af5caf464fa918b73aae723df4c986b4de947d1e9dca38c3363c88b0aeab84a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb64bd87594c2eeafd35a5ef9af465828f0a815f129f0d2d5e5d70eb59a0123b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aa
f09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb64bd87594c2eeafd35a5ef9af465828f0a815f129f0d2d5e5d70eb59a0123b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.930561 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64d35c5d-39eb-4a2d-8e2a-ca7f5883644b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21285509dbb2f59833f11569ed63e61412060a1abff27f0650603553139d4b9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12a1a9d9b9431411c444116a3767697389bb07a7c7a6029f5f2da7845820e01a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877
441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a405256fbb54ea23bc63301a6ec47d4c95b136f4539ec75e6b3dc63d3b816885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5b098072ae8786aec1e9b634d91d42b268bcf0d4088469bf67624ca6303e65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5193a27a1241562d752724880c56d180111ec04ea86bfe3c319e885dfd263273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4b6324f5deb306054f5d11767e02171b3c93963af9b99e8e12aca0fe8e5b1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4b6324f5deb306054f5d11767e02171b3c93963af9b99e8e12aca0fe8e5b1d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76438a49eeb93884dfb50594be34601b0a2f215d7cf6a4357b42dd76517bf599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76438a49eeb93884dfb50594be34601b0a2f215d7cf6a4357b42dd76517bf599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ed28ff4d8747036a3cbd04976c94b09a39e6de26a1ec019ac1c117e11144d275\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed28ff4d8747036a3cbd04976c94b09a39e6de26a1ec019ac1c117e11144d275\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.944274 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a2c5a50030e2426ca7bb5656114736b9d64e5aa32a5d641ae178f317ecd25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.959642 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r6mxl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5147fe08-c025-48e8-a623-263b1452e810\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb738c5d1dec4ef4443cff8004d03cb4fc726da186267f178902778c0d647b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q4ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r6mxl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.966021 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.966079 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.966095 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.966119 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.966137 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:02Z","lastTransitionTime":"2026-01-28T15:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.978421 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"673baa26-aa9b-4740-b00a-27d20d947fc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0855aa9e95ef9375ded8247008dd1b259573f1b89b2aacb460cb2de280d4289\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fc6009434717a8377a5cc6a836235c61de79ea9f85581a22d8c2229dde2c4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c956w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:55Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9hnxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:02 crc kubenswrapper[4893]: I0128 15:03:02.996412 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ddd967-f9a8-464a-95de-512c9c5874fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0e422c3928f128e4248f840285af5367e8d81600aebebaefdc456e95b56f268\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjvvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.1
68.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l2nht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:02Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.012265 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-krkz9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a51e5a50-969c-4f25-a895-ebb119642512\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c04d370bb1ae00208a3c91c24a8d3bf60236155408ef3b6b3224e20ffdd5d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:02:30Z\\\",\\\"message\\\":\\\"2026-01-28T15:01:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2d0ac4ac-2bdb-4cf2-a16a-53924abd5864\\\\n2026-01-28T15:01:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2d0ac4ac-2bdb-4cf2-a16a-53924abd5864 to /host/opt/cni/bin/\\\\n2026-01-28T15:01:45Z [verbose] multus-daemon started\\\\n2026-01-28T15:01:45Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:02:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8mtt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-krkz9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.033105 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c352876-4732-4d74-9b55-1e6b94b9df0b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2bc69c9f9f96434c77986da933e4b0abdf3929e9465f09377e7c1e2add631b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ea532d691b7873cf2a81f3810a4398f913320850847cf264c56dc77e6a1e540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c1f6898c57b09b2eb57a29087acfc87191646ec15aa44307fc871c59632a74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.050589 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6908a5e2-782f-4cc6-af34-a62a546a9dcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8aebd6e0a3c524d6d950232c7b12bb477ac32d9334863f94e6b3fa094689076\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a64db1e71a93568b0df4fd7ba25703120bf42dad2dac4a67a4c59481fea8e45c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef7d43d04a0e8db178dabe494cea96d5bce395b0323225f7aefeab20beb8376d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9ce2deb8952a82697e42e94f03a45cf2e37a5e898d6f7cc04c7488cae77045b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.068727 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fce95a28-d92e-420e-b16d-f90868902d76\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4132d2592d5c12a6e2fe340b9a6bc7fb6104bcd378bb48e44da753c0ed1be380\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"le observer\\\\nW0128 15:01:42.471844 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0128 15:01:42.471953 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:01:42.472588 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1589425802/tls.crt::/tmp/serving-cert-1589425802/tls.key\\\\\\\"\\\\nI0128 15:01:42.739846 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 15:01:42.748057 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 15:01:42.748090 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 15:01:42.748122 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 15:01:42.748129 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 15:01:42.755431 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 15:01:42.755461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755468 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:01:42.755519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:01:42.755525 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:01:42.755540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:01:42.755548 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:01:42.755774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 15:01:42.756760 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:37Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:02:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.069555 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.069588 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.069597 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.069612 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.069623 4893 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:03Z","lastTransitionTime":"2026-01-28T15:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.085749 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.099830 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.118193 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e13a9510f1fc963970a42854f9054ba8b67685179c901d59e2d7537a0d62cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47ccdcba7c2a8ce82c9d3696ba5f95f6f16564ff4dcbe3abd18d2d48b780af14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.133414 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.151085 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea2b6d0fcfae1c4bd802b61e6fdd78983e4c3949b98a4094e44edc8d1ca98ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.169256 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hn5qq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"001ac9ae-35b3-4f82-abaf-1eb6088441e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92fb7d2fd64013dd0d05346a2419c613c95c92a5bcc929f7bb40e6d5f68257bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5fpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hn5qq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.173047 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.173086 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.173099 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.173117 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.173130 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:03Z","lastTransitionTime":"2026-01-28T15:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.191649 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-h786s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac863e9c-63ed-4c56-8687-839ba5845dff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9c9272b926ad1cff2b3a767ab096878322acf122675f109af457a79bd6b32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ee34f46f59681733c1f7a7777205c3bd938ebdf729bc9ec2a3cc6a3638f446f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d01f27
36ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d01f2736ce8389a579babfdef62157769a3c45a228b1eccf30949c56793cc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c9c7644b07f3ffd80833f74c847194117eead1a8daed61e20be7fb3f742adc8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5d46bade3e92b850e8f4270ad7beb74c0997fa2d8c361143132eb523c39f9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5267412bcda9ff603f0b8e71d9534e47237fc5f99ae30cf1b9c1983d18a5459e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35cd59e35604a0569ba8cdcf393a1387cf7a9d234cdc6ef61a8449f0ceaed0ba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pjrm9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-h786s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.216466 4893 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"135b9f51-26ac-44c4-a817-cbfa4b36ae54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:43Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32
fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:02:43Z\\\",\\\"message\\\":\\\"achine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0128 15:02:43.114554 6955 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:02:43Z is after 2025-08-24T17:21:41Z]\\\\nI0128 15:02:43.1145\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:02:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5q54w_openshift-ovn-kubernetes(135b9f51-26ac-44c4-a817-cbfa4b36ae54)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:01:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:01:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:01:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwtf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5q54w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.232018 4893 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dqjfn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27c2667f-3b81-4103-b924-fd2ec1678757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:01:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c28r2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:01:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dqjfn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:03Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.277691 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.277785 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.277797 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.277818 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.277833 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:03Z","lastTransitionTime":"2026-01-28T15:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.381339 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.381401 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.381427 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.381460 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.381517 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:03Z","lastTransitionTime":"2026-01-28T15:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.484622 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.484669 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.484679 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.484696 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.484707 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:03Z","lastTransitionTime":"2026-01-28T15:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.588660 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.588755 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.588777 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.588810 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.588831 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:03Z","lastTransitionTime":"2026-01-28T15:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.691826 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.691881 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.691899 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.691926 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.691945 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:03Z","lastTransitionTime":"2026-01-28T15:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.795310 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.795442 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.795468 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.795534 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.795563 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:03Z","lastTransitionTime":"2026-01-28T15:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.891033 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:03:03 crc kubenswrapper[4893]: E0128 15:03:03.891258 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.894063 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 04:29:58.143166854 +0000 UTC Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.899211 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.899292 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.899311 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.899347 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:03 crc kubenswrapper[4893]: I0128 15:03:03.899368 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:03Z","lastTransitionTime":"2026-01-28T15:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.003097 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.003177 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.003199 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.003661 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.003884 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:04Z","lastTransitionTime":"2026-01-28T15:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.107723 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.107786 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.107814 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.107845 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.107865 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:04Z","lastTransitionTime":"2026-01-28T15:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.211221 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.211297 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.211317 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.211347 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.211367 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:04Z","lastTransitionTime":"2026-01-28T15:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.314591 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.314672 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.314697 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.314728 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.314749 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:04Z","lastTransitionTime":"2026-01-28T15:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.418242 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.418299 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.418315 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.418334 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.418344 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:04Z","lastTransitionTime":"2026-01-28T15:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.521825 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.521907 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.521933 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.521966 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.521988 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:04Z","lastTransitionTime":"2026-01-28T15:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.611532 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.611588 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.611604 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.611624 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.611637 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:04Z","lastTransitionTime":"2026-01-28T15:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:04 crc kubenswrapper[4893]: E0128 15:03:04.632069 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.636908 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.636990 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.637015 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.637053 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.637081 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:04Z","lastTransitionTime":"2026-01-28T15:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:04 crc kubenswrapper[4893]: E0128 15:03:04.657431 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.662671 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.662722 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.662739 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.662764 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.662775 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:04Z","lastTransitionTime":"2026-01-28T15:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:04 crc kubenswrapper[4893]: E0128 15:03:04.680340 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.687056 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.687105 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.687125 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.687152 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.687171 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:04Z","lastTransitionTime":"2026-01-28T15:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:04 crc kubenswrapper[4893]: E0128 15:03:04.710992 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.716456 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.716548 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.716570 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.716604 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.716626 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:04Z","lastTransitionTime":"2026-01-28T15:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:04 crc kubenswrapper[4893]: E0128 15:03:04.737250 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a030eed1-afa1-4d30-ad93-dc087f4d77df\\\",\\\"systemUUID\\\":\\\"229bc78e-0037-4fd6-b24e-ff333227d169\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:04 crc kubenswrapper[4893]: E0128 15:03:04.737608 4893 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.740618 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.740681 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.740696 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.740716 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.740730 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:04Z","lastTransitionTime":"2026-01-28T15:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.844445 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.844541 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.844556 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.844579 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.844595 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:04Z","lastTransitionTime":"2026-01-28T15:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.891014 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.891069 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.891107 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:04 crc kubenswrapper[4893]: E0128 15:03:04.891224 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:04 crc kubenswrapper[4893]: E0128 15:03:04.891309 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:04 crc kubenswrapper[4893]: E0128 15:03:04.891581 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.894958 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 01:28:58.276978711 +0000 UTC Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.947311 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.947407 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.947428 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.947460 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.947512 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:04Z","lastTransitionTime":"2026-01-28T15:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.947311 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.947407 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.947428 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.947460 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:03:04 crc kubenswrapper[4893]: I0128 15:03:04.947512 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:04Z","lastTransitionTime":"2026-01-28T15:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.050159 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.050223 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.050237 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.050264 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.050279 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:05Z","lastTransitionTime":"2026-01-28T15:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.153313 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.153374 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.153392 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.153416 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.153431 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:05Z","lastTransitionTime":"2026-01-28T15:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.257731 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.257802 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.257822 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.257846 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.257863 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:05Z","lastTransitionTime":"2026-01-28T15:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.361518 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.361597 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.361682 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.361721 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.361744 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:05Z","lastTransitionTime":"2026-01-28T15:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.465600 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.465676 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.465702 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.465735 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.465756 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:05Z","lastTransitionTime":"2026-01-28T15:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.569554 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.569617 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.569636 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.569667 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.569687 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:05Z","lastTransitionTime":"2026-01-28T15:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.673859 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.673952 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.673977 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.674005 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.674028 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:05Z","lastTransitionTime":"2026-01-28T15:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.777947 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.778020 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.778032 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.778053 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.778071 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:05Z","lastTransitionTime":"2026-01-28T15:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
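Every Not Ready heartbeat in this window carries the same root cause, and it names a directly checkable condition: the container runtime's CNI layer found no network definition in /etc/kubernetes/cni/net.d/, so it reports NetworkReady=false and the kubelet stays NotReady until the network operator writes one. The sketch below only illustrates the directory check; the real logic lives in the runtime's CNI code (ocicni in CRI-O), and the set of accepted extensions is an assumption that may differ by version:

```python
# Illustrative sketch only - the real check is done by the container
# runtime's CNI layer, not by this code. Reports whether the directory
# named in the log message contains any CNI network definition.
from pathlib import Path

NET_D = Path("/etc/kubernetes/cni/net.d")   # path from the log message
SUFFIXES = {".conf", ".conflist", ".json"}  # assumed accepted extensions

configs = sorted(p for p in NET_D.iterdir() if p.suffix in SUFFIXES) if NET_D.is_dir() else []
if configs:
    for path in configs:
        print("found CNI config:", path)
else:
    print(f"no CNI configuration file in {NET_D}/ - NetworkReady stays False")
```

On this node the directory is evidently empty, so the message repeats verbatim on every heartbeat below until the OVN/multus pods manage to start.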
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.882090 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.882170 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.882192 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.882226 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.882252 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:05Z","lastTransitionTime":"2026-01-28T15:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.891356 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn"
Jan 28 15:03:05 crc kubenswrapper[4893]: E0128 15:03:05.891625 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757"
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.895470 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 04:19:59.108509974 +0000 UTC
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.986514 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.986593 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.986614 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.986643 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:03:05 crc kubenswrapper[4893]: I0128 15:03:05.986664 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:05Z","lastTransitionTime":"2026-01-28T15:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.089777 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.089845 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.089869 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.089903 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.089930 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:06Z","lastTransitionTime":"2026-01-28T15:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.193271 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.193348 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.193372 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.193398 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.193422 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:06Z","lastTransitionTime":"2026-01-28T15:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.296786 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.296866 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.296885 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.296917 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.296941 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:06Z","lastTransitionTime":"2026-01-28T15:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.401064 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.401147 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.401169 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.401199 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.401221 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:06Z","lastTransitionTime":"2026-01-28T15:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.504858 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.504930 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.504944 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.504964 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.504978 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:06Z","lastTransitionTime":"2026-01-28T15:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.608025 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.608123 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.608140 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.608167 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.608183 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:06Z","lastTransitionTime":"2026-01-28T15:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.713053 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.713470 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.713524 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.713557 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.713588 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:06Z","lastTransitionTime":"2026-01-28T15:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.817223 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.817280 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.817294 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.817317 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.817340 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:06Z","lastTransitionTime":"2026-01-28T15:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.892172 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.892012 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.892352 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:03:06 crc kubenswrapper[4893]: E0128 15:03:06.892567 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:03:06 crc kubenswrapper[4893]: E0128 15:03:06.892597 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
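The recurring pattern above, "No sandbox for pod can be found" immediately followed by "Error syncing pod, skipping", is the kubelet's pod workers refusing to create sandboxes for pods that need the cluster network while the runtime reports NetworkReady=false; host-network pods are not gated this way, which is why only these CNI-dependent pods (the console plugin, the diagnostics pair, and the multus metrics daemon) keep reappearing. A paraphrase-sketch of that gate, with illustrative names rather than kubelet's own:

```python
# Paraphrase-sketch of the gate behind "Error syncing pod, skipping".
# The real logic is in kubelet's pod workers; names here are illustrative.
def should_skip_sync(host_network: bool, network_ready: bool) -> bool:
    """Pods on the cluster network wait for CNI; host-network pods do not."""
    return not network_ready and not host_network

pods = [
    ("openshift-network-console/networking-console-plugin-85b44fc459-gdk6g", False),
    ("openshift-network-diagnostics/network-check-target-xd92c", False),
    ("openshift-multus/network-metrics-daemon-dqjfn", False),
]
for name, host_net in pods:
    if should_skip_sync(host_net, network_ready=False):
        print(f"Error syncing pod, skipping: {name}: network is not ready")
```

The skipped pods are retried on a backoff, so the same trio shows up roughly once per second in this log until the CNI configuration appears.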
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.895609 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 23:14:13.498376209 +0000 UTC Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.920291 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.920335 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.920346 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.920363 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:06 crc kubenswrapper[4893]: I0128 15:03:06.920374 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:06Z","lastTransitionTime":"2026-01-28T15:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.023768 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.023839 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.023859 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.023886 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.023904 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:07Z","lastTransitionTime":"2026-01-28T15:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.126908 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.126986 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.127010 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.127043 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.127073 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:07Z","lastTransitionTime":"2026-01-28T15:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.230216 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.230252 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.230263 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.230279 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.230291 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:07Z","lastTransitionTime":"2026-01-28T15:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.333902 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.334012 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.334040 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.334067 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.334087 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:07Z","lastTransitionTime":"2026-01-28T15:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.438228 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.438594 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.438720 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.438800 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.438869 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:07Z","lastTransitionTime":"2026-01-28T15:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.542128 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.542179 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.542201 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.542223 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.542235 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:07Z","lastTransitionTime":"2026-01-28T15:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.645564 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.645650 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.645680 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.645719 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.645746 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:07Z","lastTransitionTime":"2026-01-28T15:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.748309 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.748355 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.748373 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.748394 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.748411 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:07Z","lastTransitionTime":"2026-01-28T15:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.851890 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.851973 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.852020 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.852064 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.852089 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:07Z","lastTransitionTime":"2026-01-28T15:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.891095 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:03:07 crc kubenswrapper[4893]: E0128 15:03:07.891277 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.896377 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 10:27:01.612432378 +0000 UTC Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.955922 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.955990 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.956018 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.956047 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:07 crc kubenswrapper[4893]: I0128 15:03:07.956067 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:07Z","lastTransitionTime":"2026-01-28T15:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.059447 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.059564 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.059585 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.059617 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.059642 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:08Z","lastTransitionTime":"2026-01-28T15:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.162963 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.163019 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.163034 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.163053 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.163069 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:08Z","lastTransitionTime":"2026-01-28T15:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.266270 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.266328 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.266342 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.266362 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.266380 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:08Z","lastTransitionTime":"2026-01-28T15:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.368716 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.368757 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.368772 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.368788 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.368798 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:08Z","lastTransitionTime":"2026-01-28T15:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.471905 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.472000 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.472021 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.472048 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.472067 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:08Z","lastTransitionTime":"2026-01-28T15:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.575907 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.576018 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.576035 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.576057 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.576071 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:08Z","lastTransitionTime":"2026-01-28T15:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.679640 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.679746 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.679776 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.679815 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.679840 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:08Z","lastTransitionTime":"2026-01-28T15:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.784210 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.784321 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.784344 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.784388 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.784415 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:08Z","lastTransitionTime":"2026-01-28T15:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.888095 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.888230 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.888258 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.888294 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.888317 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:08Z","lastTransitionTime":"2026-01-28T15:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.891692 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.891776 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:08 crc kubenswrapper[4893]: E0128 15:03:08.891953 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.891728 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:08 crc kubenswrapper[4893]: E0128 15:03:08.892205 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:08 crc kubenswrapper[4893]: E0128 15:03:08.892426 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.896599 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 14:56:07.880021673 +0000 UTC Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.991784 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.991883 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.991904 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.991943 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:08 crc kubenswrapper[4893]: I0128 15:03:08.991964 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:08Z","lastTransitionTime":"2026-01-28T15:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.095075 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.095138 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.095158 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.095187 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.095208 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:09Z","lastTransitionTime":"2026-01-28T15:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.197980 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.198032 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.198045 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.198063 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.198077 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:09Z","lastTransitionTime":"2026-01-28T15:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.301529 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.301596 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.301614 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.301640 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.301658 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:09Z","lastTransitionTime":"2026-01-28T15:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.404746 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.404803 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.404820 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.404842 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.404858 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:09Z","lastTransitionTime":"2026-01-28T15:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.507352 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.507405 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.507419 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.507443 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.507458 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:09Z","lastTransitionTime":"2026-01-28T15:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.610408 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.610536 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.610556 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.610579 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.610619 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:09Z","lastTransitionTime":"2026-01-28T15:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.714988 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.715060 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.715080 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.715110 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.715132 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:09Z","lastTransitionTime":"2026-01-28T15:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.819853 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.819970 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.819995 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.820026 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.820046 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:09Z","lastTransitionTime":"2026-01-28T15:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.890872 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:03:09 crc kubenswrapper[4893]: E0128 15:03:09.891452 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.897533 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 03:52:22.879720093 +0000 UTC Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.927456 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.927567 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.927584 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.927609 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:09 crc kubenswrapper[4893]: I0128 15:03:09.927625 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:09Z","lastTransitionTime":"2026-01-28T15:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.031273 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.031347 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.031365 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.031391 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.031407 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:10Z","lastTransitionTime":"2026-01-28T15:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.135517 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.136010 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.136209 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.136401 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.136655 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:10Z","lastTransitionTime":"2026-01-28T15:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.240768 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.240841 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.240862 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.240895 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.240916 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:10Z","lastTransitionTime":"2026-01-28T15:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.344407 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.344556 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.344587 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.344617 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.344636 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:10Z","lastTransitionTime":"2026-01-28T15:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.448803 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.448876 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.448902 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.448940 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.448968 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:10Z","lastTransitionTime":"2026-01-28T15:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.552276 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.552351 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.552377 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.552410 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.552435 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:10Z","lastTransitionTime":"2026-01-28T15:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.656528 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.656612 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.656634 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.656669 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.656694 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:10Z","lastTransitionTime":"2026-01-28T15:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.762070 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.762147 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.762168 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.762199 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.762219 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:10Z","lastTransitionTime":"2026-01-28T15:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.864801 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.864855 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.864871 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.864892 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.864907 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:10Z","lastTransitionTime":"2026-01-28T15:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.890864 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:10 crc kubenswrapper[4893]: E0128 15:03:10.891005 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.891300 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.891434 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:10 crc kubenswrapper[4893]: E0128 15:03:10.891550 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:10 crc kubenswrapper[4893]: E0128 15:03:10.891668 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.898524 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 20:23:56.319545953 +0000 UTC Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.968271 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.968336 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.968352 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.968376 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:10 crc kubenswrapper[4893]: I0128 15:03:10.968391 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:10Z","lastTransitionTime":"2026-01-28T15:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.071434 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.071526 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.071544 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.071565 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.071581 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:11Z","lastTransitionTime":"2026-01-28T15:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.173877 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.173932 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.173952 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.173980 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.173999 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:11Z","lastTransitionTime":"2026-01-28T15:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.276180 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.276241 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.276260 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.276284 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.276300 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:11Z","lastTransitionTime":"2026-01-28T15:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.379093 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.379142 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.379161 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.379186 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.379200 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:11Z","lastTransitionTime":"2026-01-28T15:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.483293 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.483354 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.483377 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.483406 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.483425 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:11Z","lastTransitionTime":"2026-01-28T15:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.586786 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.586848 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.586865 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.586921 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.586940 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:11Z","lastTransitionTime":"2026-01-28T15:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.689800 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.689873 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.689899 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.689925 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.689942 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:11Z","lastTransitionTime":"2026-01-28T15:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.793856 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.793915 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.793930 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.793955 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.793970 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:11Z","lastTransitionTime":"2026-01-28T15:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.891805 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:03:11 crc kubenswrapper[4893]: E0128 15:03:11.891980 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.896853 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.896892 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.896908 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.896926 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.896937 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:11Z","lastTransitionTime":"2026-01-28T15:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:11 crc kubenswrapper[4893]: I0128 15:03:11.899074 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 19:25:33.141253138 +0000 UTC Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.000064 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.000140 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.000159 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.000190 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.000214 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:12Z","lastTransitionTime":"2026-01-28T15:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.103607 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.103701 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.103718 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.103744 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.103770 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:12Z","lastTransitionTime":"2026-01-28T15:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.206510 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.206568 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.206586 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.206607 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.206622 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:12Z","lastTransitionTime":"2026-01-28T15:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.310147 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.310237 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.310263 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.310297 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.310325 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:12Z","lastTransitionTime":"2026-01-28T15:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.414216 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.414263 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.414275 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.414293 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.414305 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:12Z","lastTransitionTime":"2026-01-28T15:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.517830 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.517901 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.517918 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.517951 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.517971 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:12Z","lastTransitionTime":"2026-01-28T15:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.621202 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.621267 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.621280 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.621301 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.621316 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:12Z","lastTransitionTime":"2026-01-28T15:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.724716 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.724798 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.724819 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.724845 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.724859 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:12Z","lastTransitionTime":"2026-01-28T15:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.827831 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.828318 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.828433 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.828597 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.828723 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:12Z","lastTransitionTime":"2026-01-28T15:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.890886 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.891605 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.891615 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:12 crc kubenswrapper[4893]: E0128 15:03:12.892085 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:12 crc kubenswrapper[4893]: E0128 15:03:12.892239 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:12 crc kubenswrapper[4893]: E0128 15:03:12.892279 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.892540 4893 scope.go:117] "RemoveContainer" containerID="2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0" Jan 28 15:03:12 crc kubenswrapper[4893]: E0128 15:03:12.892897 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5q54w_openshift-ovn-kubernetes(135b9f51-26ac-44c4-a817-cbfa4b36ae54)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.899781 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 02:00:30.038965297 +0000 UTC Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.933888 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.933943 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.933958 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.933984 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.933998 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:12Z","lastTransitionTime":"2026-01-28T15:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.953750 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-r6mxl" podStartSLOduration=90.953719137 podStartE2EDuration="1m30.953719137s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:12.953677786 +0000 UTC m=+110.727292834" watchObservedRunningTime="2026-01-28 15:03:12.953719137 +0000 UTC m=+110.727334165" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.969245 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9hnxm" podStartSLOduration=89.969218901 podStartE2EDuration="1m29.969218901s" podCreationTimestamp="2026-01-28 15:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:12.969021036 +0000 UTC m=+110.742636074" watchObservedRunningTime="2026-01-28 15:03:12.969218901 +0000 UTC m=+110.742833939" Jan 28 15:03:12 crc kubenswrapper[4893]: I0128 15:03:12.985650 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=19.985619601 podStartE2EDuration="19.985619601s" podCreationTimestamp="2026-01-28 15:02:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:12.985273781 +0000 UTC m=+110.758888819" watchObservedRunningTime="2026-01-28 15:03:12.985619601 +0000 UTC m=+110.759234639" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.032270 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=18.032242188 podStartE2EDuration="18.032242188s" podCreationTimestamp="2026-01-28 15:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:13.016710172 +0000 UTC m=+110.790325230" watchObservedRunningTime="2026-01-28 15:03:13.032242188 +0000 UTC m=+110.805857226" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.033061 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podStartSLOduration=91.03305413 podStartE2EDuration="1m31.03305413s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:13.031717253 +0000 UTC m=+110.805332301" watchObservedRunningTime="2026-01-28 15:03:13.03305413 +0000 UTC m=+110.806669168" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.037799 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.037871 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.037893 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.037917 4893 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.037931 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:13Z","lastTransitionTime":"2026-01-28T15:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.051291 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=90.051260729 podStartE2EDuration="1m30.051260729s" podCreationTimestamp="2026-01-28 15:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:13.050734734 +0000 UTC m=+110.824349792" watchObservedRunningTime="2026-01-28 15:03:13.051260729 +0000 UTC m=+110.824875767" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.116845 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-krkz9" podStartSLOduration=91.116820245 podStartE2EDuration="1m31.116820245s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:13.116399194 +0000 UTC m=+110.890014232" watchObservedRunningTime="2026-01-28 15:03:13.116820245 +0000 UTC m=+110.890435273" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.132971 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=89.132941507 podStartE2EDuration="1m29.132941507s" podCreationTimestamp="2026-01-28 15:01:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:13.13271109 +0000 UTC m=+110.906326118" watchObservedRunningTime="2026-01-28 15:03:13.132941507 +0000 UTC m=+110.906556535" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.141058 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.141105 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.141119 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.141140 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.141155 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:13Z","lastTransitionTime":"2026-01-28T15:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.159202 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=63.159171735 podStartE2EDuration="1m3.159171735s" podCreationTimestamp="2026-01-28 15:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:13.147179727 +0000 UTC m=+110.920794765" watchObservedRunningTime="2026-01-28 15:03:13.159171735 +0000 UTC m=+110.932786763" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.159400 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-hn5qq" podStartSLOduration=91.159393521 podStartE2EDuration="1m31.159393521s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:13.159103194 +0000 UTC m=+110.932718232" watchObservedRunningTime="2026-01-28 15:03:13.159393521 +0000 UTC m=+110.933008549" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.179993 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-h786s" podStartSLOduration=91.179958995 podStartE2EDuration="1m31.179958995s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:13.179156813 +0000 UTC m=+110.952771851" watchObservedRunningTime="2026-01-28 15:03:13.179958995 +0000 UTC m=+110.953574023" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.246999 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.247048 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.247064 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.247085 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.247099 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:13Z","lastTransitionTime":"2026-01-28T15:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.350394 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.350454 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.350467 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.350800 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.350812 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:13Z","lastTransitionTime":"2026-01-28T15:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.454785 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.454849 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.454866 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.454894 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.454910 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:13Z","lastTransitionTime":"2026-01-28T15:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.557452 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.557508 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.557521 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.557539 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.557549 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:13Z","lastTransitionTime":"2026-01-28T15:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.661622 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.661688 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.661707 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.661733 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.661751 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:13Z","lastTransitionTime":"2026-01-28T15:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.764379 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.764457 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.764491 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.764523 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.764537 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:13Z","lastTransitionTime":"2026-01-28T15:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.867457 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.867562 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.867586 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.867620 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.867645 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:13Z","lastTransitionTime":"2026-01-28T15:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.891341 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:03:13 crc kubenswrapper[4893]: E0128 15:03:13.891508 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.900592 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 21:28:08.501722319 +0000 UTC Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.971209 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.971248 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.971258 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.971273 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:13 crc kubenswrapper[4893]: I0128 15:03:13.971287 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:13Z","lastTransitionTime":"2026-01-28T15:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.074206 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.074270 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.074283 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.074299 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.074309 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:14Z","lastTransitionTime":"2026-01-28T15:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.176592 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.176633 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.176649 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.176667 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.176677 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:14Z","lastTransitionTime":"2026-01-28T15:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.279862 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.279903 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.279913 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.279932 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.279965 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:14Z","lastTransitionTime":"2026-01-28T15:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.383289 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.383353 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.383367 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.383393 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.383410 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:14Z","lastTransitionTime":"2026-01-28T15:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.485729 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.485787 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.485804 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.485826 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.485840 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:14Z","lastTransitionTime":"2026-01-28T15:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.588521 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.588570 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.588580 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.588600 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.588613 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:14Z","lastTransitionTime":"2026-01-28T15:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.691438 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.691485 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.691495 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.691509 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.691520 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:14Z","lastTransitionTime":"2026-01-28T15:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.795038 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.795084 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.795100 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.795122 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.795133 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:14Z","lastTransitionTime":"2026-01-28T15:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.891881 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.892054 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:14 crc kubenswrapper[4893]: E0128 15:03:14.892218 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.892493 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:14 crc kubenswrapper[4893]: E0128 15:03:14.892580 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:14 crc kubenswrapper[4893]: E0128 15:03:14.892798 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.897372 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.897409 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.897421 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.897446 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.897467 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:14Z","lastTransitionTime":"2026-01-28T15:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.901601 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 06:35:41.395121584 +0000 UTC Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.977631 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.977710 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.977782 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.977818 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:14 crc kubenswrapper[4893]: I0128 15:03:14.977841 4893 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:14Z","lastTransitionTime":"2026-01-28T15:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:15 crc kubenswrapper[4893]: I0128 15:03:15.030140 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-cq9zp"] Jan 28 15:03:15 crc kubenswrapper[4893]: I0128 15:03:15.030745 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cq9zp" Jan 28 15:03:15 crc kubenswrapper[4893]: I0128 15:03:15.034875 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 28 15:03:15 crc kubenswrapper[4893]: I0128 15:03:15.035496 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 28 15:03:15 crc kubenswrapper[4893]: I0128 15:03:15.036041 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 28 15:03:15 crc kubenswrapper[4893]: I0128 15:03:15.038258 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 28 15:03:15 crc kubenswrapper[4893]: I0128 15:03:15.119219 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a5c605ef-a6b6-4120-890d-61ffc9acb8f4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cq9zp\" (UID: \"a5c605ef-a6b6-4120-890d-61ffc9acb8f4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cq9zp" Jan 28 15:03:15 crc kubenswrapper[4893]: I0128 15:03:15.119616 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5c605ef-a6b6-4120-890d-61ffc9acb8f4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cq9zp\" (UID: \"a5c605ef-a6b6-4120-890d-61ffc9acb8f4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cq9zp" Jan 28 15:03:15 crc kubenswrapper[4893]: I0128 15:03:15.119636 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a5c605ef-a6b6-4120-890d-61ffc9acb8f4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cq9zp\" (UID: \"a5c605ef-a6b6-4120-890d-61ffc9acb8f4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cq9zp" Jan 28 15:03:15 crc kubenswrapper[4893]: I0128 15:03:15.119657 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a5c605ef-a6b6-4120-890d-61ffc9acb8f4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cq9zp\" (UID: \"a5c605ef-a6b6-4120-890d-61ffc9acb8f4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cq9zp" Jan 28 15:03:15 crc kubenswrapper[4893]: I0128 15:03:15.119710 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a5c605ef-a6b6-4120-890d-61ffc9acb8f4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cq9zp\" (UID: \"a5c605ef-a6b6-4120-890d-61ffc9acb8f4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cq9zp" Jan 28 15:03:15 crc kubenswrapper[4893]: I0128 15:03:15.221393 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a5c605ef-a6b6-4120-890d-61ffc9acb8f4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cq9zp\" (UID: \"a5c605ef-a6b6-4120-890d-61ffc9acb8f4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cq9zp" Jan 28 15:03:15 crc 
kubenswrapper[4893]: I0128 15:03:15.221508 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5c605ef-a6b6-4120-890d-61ffc9acb8f4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cq9zp\" (UID: \"a5c605ef-a6b6-4120-890d-61ffc9acb8f4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cq9zp" Jan 28 15:03:15 crc kubenswrapper[4893]: I0128 15:03:15.221534 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a5c605ef-a6b6-4120-890d-61ffc9acb8f4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cq9zp\" (UID: \"a5c605ef-a6b6-4120-890d-61ffc9acb8f4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cq9zp" Jan 28 15:03:15 crc kubenswrapper[4893]: I0128 15:03:15.221585 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a5c605ef-a6b6-4120-890d-61ffc9acb8f4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cq9zp\" (UID: \"a5c605ef-a6b6-4120-890d-61ffc9acb8f4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cq9zp" Jan 28 15:03:15 crc kubenswrapper[4893]: I0128 15:03:15.221699 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a5c605ef-a6b6-4120-890d-61ffc9acb8f4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cq9zp\" (UID: \"a5c605ef-a6b6-4120-890d-61ffc9acb8f4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cq9zp" Jan 28 15:03:15 crc kubenswrapper[4893]: I0128 15:03:15.221754 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a5c605ef-a6b6-4120-890d-61ffc9acb8f4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cq9zp\" (UID: \"a5c605ef-a6b6-4120-890d-61ffc9acb8f4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cq9zp" Jan 28 15:03:15 crc kubenswrapper[4893]: I0128 15:03:15.221862 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a5c605ef-a6b6-4120-890d-61ffc9acb8f4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cq9zp\" (UID: \"a5c605ef-a6b6-4120-890d-61ffc9acb8f4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cq9zp" Jan 28 15:03:15 crc kubenswrapper[4893]: I0128 15:03:15.224292 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a5c605ef-a6b6-4120-890d-61ffc9acb8f4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cq9zp\" (UID: \"a5c605ef-a6b6-4120-890d-61ffc9acb8f4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cq9zp" Jan 28 15:03:15 crc kubenswrapper[4893]: I0128 15:03:15.235518 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5c605ef-a6b6-4120-890d-61ffc9acb8f4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cq9zp\" (UID: \"a5c605ef-a6b6-4120-890d-61ffc9acb8f4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cq9zp" Jan 28 15:03:15 crc kubenswrapper[4893]: I0128 15:03:15.249813 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a5c605ef-a6b6-4120-890d-61ffc9acb8f4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cq9zp\" (UID: \"a5c605ef-a6b6-4120-890d-61ffc9acb8f4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cq9zp"
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a5c605ef-a6b6-4120-890d-61ffc9acb8f4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cq9zp\" (UID: \"a5c605ef-a6b6-4120-890d-61ffc9acb8f4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cq9zp" Jan 28 15:03:15 crc kubenswrapper[4893]: I0128 15:03:15.351220 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cq9zp" Jan 28 15:03:15 crc kubenswrapper[4893]: I0128 15:03:15.576333 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cq9zp" event={"ID":"a5c605ef-a6b6-4120-890d-61ffc9acb8f4","Type":"ContainerStarted","Data":"4f5ae00106371883e776e563395966e3fd09d0dc55806ef2648a6fe8aa26f9d6"} Jan 28 15:03:15 crc kubenswrapper[4893]: I0128 15:03:15.576406 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cq9zp" event={"ID":"a5c605ef-a6b6-4120-890d-61ffc9acb8f4","Type":"ContainerStarted","Data":"9f7f6a25ae084f34091aea960c8fcf96094124a3f82fb9ba0e4903b92ff61f14"} Jan 28 15:03:15 crc kubenswrapper[4893]: I0128 15:03:15.596291 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cq9zp" podStartSLOduration=93.596262366 podStartE2EDuration="1m33.596262366s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:15.595055562 +0000 UTC m=+113.368670640" watchObservedRunningTime="2026-01-28 15:03:15.596262366 +0000 UTC m=+113.369877394" Jan 28 15:03:15 crc kubenswrapper[4893]: I0128 15:03:15.891203 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:03:15 crc kubenswrapper[4893]: E0128 15:03:15.891347 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:03:15 crc kubenswrapper[4893]: I0128 15:03:15.902441 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 16:22:15.114721839 +0000 UTC Jan 28 15:03:15 crc kubenswrapper[4893]: I0128 15:03:15.902543 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 28 15:03:15 crc kubenswrapper[4893]: I0128 15:03:15.910316 4893 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 28 15:03:16 crc kubenswrapper[4893]: I0128 15:03:16.892739 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:16 crc kubenswrapper[4893]: E0128 15:03:16.892880 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:16 crc kubenswrapper[4893]: I0128 15:03:16.893219 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:16 crc kubenswrapper[4893]: E0128 15:03:16.893295 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:16 crc kubenswrapper[4893]: I0128 15:03:16.893409 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:16 crc kubenswrapper[4893]: E0128 15:03:16.893493 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:17 crc kubenswrapper[4893]: I0128 15:03:17.587005 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-krkz9_a51e5a50-969c-4f25-a895-ebb119642512/kube-multus/1.log" Jan 28 15:03:17 crc kubenswrapper[4893]: I0128 15:03:17.588518 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-krkz9_a51e5a50-969c-4f25-a895-ebb119642512/kube-multus/0.log" Jan 28 15:03:17 crc kubenswrapper[4893]: I0128 15:03:17.588579 4893 generic.go:334] "Generic (PLEG): container finished" podID="a51e5a50-969c-4f25-a895-ebb119642512" containerID="0c04d370bb1ae00208a3c91c24a8d3bf60236155408ef3b6b3224e20ffdd5d0b" exitCode=1 Jan 28 15:03:17 crc kubenswrapper[4893]: I0128 15:03:17.588632 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-krkz9" event={"ID":"a51e5a50-969c-4f25-a895-ebb119642512","Type":"ContainerDied","Data":"0c04d370bb1ae00208a3c91c24a8d3bf60236155408ef3b6b3224e20ffdd5d0b"} Jan 28 15:03:17 crc kubenswrapper[4893]: I0128 15:03:17.588688 4893 scope.go:117] "RemoveContainer" containerID="4c3913c088281703b7d0c89dc7fe9376eefa9e74c53d6e8a222c14f2b90ae6d0" Jan 28 15:03:17 crc kubenswrapper[4893]: I0128 15:03:17.589377 4893 scope.go:117] "RemoveContainer" containerID="0c04d370bb1ae00208a3c91c24a8d3bf60236155408ef3b6b3224e20ffdd5d0b" Jan 28 15:03:17 crc kubenswrapper[4893]: E0128 15:03:17.589680 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-krkz9_openshift-multus(a51e5a50-969c-4f25-a895-ebb119642512)\"" pod="openshift-multus/multus-krkz9" podUID="a51e5a50-969c-4f25-a895-ebb119642512" Jan 28 15:03:17 crc kubenswrapper[4893]: I0128 15:03:17.891211 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:03:17 crc kubenswrapper[4893]: E0128 15:03:17.891350 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:03:18 crc kubenswrapper[4893]: I0128 15:03:18.592703 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-krkz9_a51e5a50-969c-4f25-a895-ebb119642512/kube-multus/1.log" Jan 28 15:03:18 crc kubenswrapper[4893]: I0128 15:03:18.891715 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:18 crc kubenswrapper[4893]: I0128 15:03:18.891747 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:18 crc kubenswrapper[4893]: E0128 15:03:18.891866 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:18 crc kubenswrapper[4893]: I0128 15:03:18.892000 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:18 crc kubenswrapper[4893]: E0128 15:03:18.892097 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:18 crc kubenswrapper[4893]: E0128 15:03:18.892290 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:19 crc kubenswrapper[4893]: I0128 15:03:19.891094 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:03:19 crc kubenswrapper[4893]: E0128 15:03:19.891226 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:03:20 crc kubenswrapper[4893]: I0128 15:03:20.891744 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:20 crc kubenswrapper[4893]: I0128 15:03:20.891869 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:20 crc kubenswrapper[4893]: I0128 15:03:20.891744 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:20 crc kubenswrapper[4893]: E0128 15:03:20.891953 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:20 crc kubenswrapper[4893]: E0128 15:03:20.892056 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:20 crc kubenswrapper[4893]: E0128 15:03:20.892238 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:21 crc kubenswrapper[4893]: I0128 15:03:21.890985 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:03:21 crc kubenswrapper[4893]: E0128 15:03:21.891458 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:03:22 crc kubenswrapper[4893]: E0128 15:03:22.839842 4893 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 28 15:03:22 crc kubenswrapper[4893]: I0128 15:03:22.891080 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:22 crc kubenswrapper[4893]: I0128 15:03:22.891195 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:22 crc kubenswrapper[4893]: E0128 15:03:22.893302 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:22 crc kubenswrapper[4893]: I0128 15:03:22.893323 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:22 crc kubenswrapper[4893]: E0128 15:03:22.893523 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:22 crc kubenswrapper[4893]: E0128 15:03:22.893648 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:23 crc kubenswrapper[4893]: E0128 15:03:23.006981 4893 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:03:23 crc kubenswrapper[4893]: I0128 15:03:23.891742 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:03:23 crc kubenswrapper[4893]: E0128 15:03:23.892329 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:03:23 crc kubenswrapper[4893]: I0128 15:03:23.892521 4893 scope.go:117] "RemoveContainer" containerID="2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0" Jan 28 15:03:24 crc kubenswrapper[4893]: I0128 15:03:24.617468 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5q54w_135b9f51-26ac-44c4-a817-cbfa4b36ae54/ovnkube-controller/3.log" Jan 28 15:03:24 crc kubenswrapper[4893]: I0128 15:03:24.621000 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" event={"ID":"135b9f51-26ac-44c4-a817-cbfa4b36ae54","Type":"ContainerStarted","Data":"7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305"} Jan 28 15:03:24 crc kubenswrapper[4893]: I0128 15:03:24.621546 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:03:24 crc kubenswrapper[4893]: I0128 15:03:24.648557 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" podStartSLOduration=102.648531295 podStartE2EDuration="1m42.648531295s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:24.648320879 +0000 UTC m=+122.421935917" watchObservedRunningTime="2026-01-28 15:03:24.648531295 +0000 UTC m=+122.422146313" Jan 28 15:03:24 crc kubenswrapper[4893]: I0128 15:03:24.776050 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-dqjfn"] Jan 28 15:03:24 crc kubenswrapper[4893]: I0128 15:03:24.776151 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:03:24 crc kubenswrapper[4893]: E0128 15:03:24.776239 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:03:24 crc kubenswrapper[4893]: I0128 15:03:24.891563 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:24 crc kubenswrapper[4893]: I0128 15:03:24.891762 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:24 crc kubenswrapper[4893]: E0128 15:03:24.891817 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:24 crc kubenswrapper[4893]: I0128 15:03:24.891866 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:24 crc kubenswrapper[4893]: E0128 15:03:24.891994 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:24 crc kubenswrapper[4893]: E0128 15:03:24.892042 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:26 crc kubenswrapper[4893]: I0128 15:03:26.891551 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:26 crc kubenswrapper[4893]: E0128 15:03:26.891890 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:26 crc kubenswrapper[4893]: I0128 15:03:26.891709 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:26 crc kubenswrapper[4893]: E0128 15:03:26.891984 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:26 crc kubenswrapper[4893]: I0128 15:03:26.891723 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:26 crc kubenswrapper[4893]: E0128 15:03:26.892030 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:26 crc kubenswrapper[4893]: I0128 15:03:26.891680 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:03:26 crc kubenswrapper[4893]: E0128 15:03:26.892085 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:03:28 crc kubenswrapper[4893]: E0128 15:03:28.008274 4893 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:03:28 crc kubenswrapper[4893]: I0128 15:03:28.891030 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:28 crc kubenswrapper[4893]: I0128 15:03:28.891071 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:03:28 crc kubenswrapper[4893]: I0128 15:03:28.891073 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:28 crc kubenswrapper[4893]: I0128 15:03:28.891047 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:28 crc kubenswrapper[4893]: E0128 15:03:28.891178 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:28 crc kubenswrapper[4893]: E0128 15:03:28.891337 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:03:28 crc kubenswrapper[4893]: E0128 15:03:28.891360 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:28 crc kubenswrapper[4893]: E0128 15:03:28.891424 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:29 crc kubenswrapper[4893]: I0128 15:03:29.892197 4893 scope.go:117] "RemoveContainer" containerID="0c04d370bb1ae00208a3c91c24a8d3bf60236155408ef3b6b3224e20ffdd5d0b" Jan 28 15:03:30 crc kubenswrapper[4893]: I0128 15:03:30.639680 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-krkz9_a51e5a50-969c-4f25-a895-ebb119642512/kube-multus/1.log" Jan 28 15:03:30 crc kubenswrapper[4893]: I0128 15:03:30.639978 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-krkz9" event={"ID":"a51e5a50-969c-4f25-a895-ebb119642512","Type":"ContainerStarted","Data":"70cbfe0325abc353a7d194d727a957eea71dda00452d5a048b5b50696e54c1e4"} Jan 28 15:03:30 crc kubenswrapper[4893]: I0128 15:03:30.891238 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:30 crc kubenswrapper[4893]: I0128 15:03:30.891286 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:30 crc kubenswrapper[4893]: I0128 15:03:30.891305 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:30 crc kubenswrapper[4893]: E0128 15:03:30.891355 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:30 crc kubenswrapper[4893]: E0128 15:03:30.891465 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:30 crc kubenswrapper[4893]: E0128 15:03:30.891559 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:30 crc kubenswrapper[4893]: I0128 15:03:30.891687 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:03:30 crc kubenswrapper[4893]: E0128 15:03:30.891763 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:03:32 crc kubenswrapper[4893]: I0128 15:03:32.891730 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:03:32 crc kubenswrapper[4893]: I0128 15:03:32.891756 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:32 crc kubenswrapper[4893]: I0128 15:03:32.891809 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:32 crc kubenswrapper[4893]: I0128 15:03:32.891730 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:32 crc kubenswrapper[4893]: E0128 15:03:32.895324 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dqjfn" podUID="27c2667f-3b81-4103-b924-fd2ec1678757" Jan 28 15:03:32 crc kubenswrapper[4893]: E0128 15:03:32.895856 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:32 crc kubenswrapper[4893]: E0128 15:03:32.896163 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:32 crc kubenswrapper[4893]: E0128 15:03:32.896589 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:34 crc kubenswrapper[4893]: I0128 15:03:34.891251 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:34 crc kubenswrapper[4893]: I0128 15:03:34.891307 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:34 crc kubenswrapper[4893]: I0128 15:03:34.891321 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:34 crc kubenswrapper[4893]: I0128 15:03:34.891496 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn" Jan 28 15:03:34 crc kubenswrapper[4893]: I0128 15:03:34.893781 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 28 15:03:34 crc kubenswrapper[4893]: I0128 15:03:34.897725 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 28 15:03:34 crc kubenswrapper[4893]: I0128 15:03:34.897787 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 28 15:03:34 crc kubenswrapper[4893]: I0128 15:03:34.897906 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 28 15:03:34 crc kubenswrapper[4893]: I0128 15:03:34.897946 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 28 15:03:34 crc kubenswrapper[4893]: I0128 15:03:34.897906 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.456016 4893 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.497543 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.498161 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.499446 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vd8ml"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.499980 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.508634 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.508951 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.509038 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.508649 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.508767 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.508853 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.509234 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.510668 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-sfkds"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.509758 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.512216 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.517950 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.518461 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hltf8"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.538667 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-vzxzx"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.538863 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zgw9r"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.539093 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-dqcjb"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.539439 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-8ppbb"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.539576 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.539807 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.540023 4893 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhwwk"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.540427 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6q42k"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.540720 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-z2gjc"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.540998 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qvscx"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.541371 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qvscx" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.541630 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-8ppbb" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.541862 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.531923 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.519539 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-sfkds" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.542147 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhwwk" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.542206 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.542251 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-z2gjc" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.542585 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hltf8" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.542719 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-vzxzx" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.543053 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.543128 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqcjb" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.532006 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.544427 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.532056 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.532095 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.532105 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.532151 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.532183 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.532235 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.549254 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.549335 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.552864 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8jcmm"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.553527 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.554378 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6wtc2"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.555408 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lcqnw"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.555952 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-sfkds"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.555981 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.556070 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.556166 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lcqnw" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.556186 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhwwk"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.556575 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8jcmm" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.556892 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6wtc2" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.559219 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.559474 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.559636 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.559861 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.560806 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hltf8"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.561768 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.561975 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.562146 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.561976 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-kvxbc"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.562295 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.563024 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kvxbc" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.564018 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.564048 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.564276 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.565347 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.565598 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.565718 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.565830 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.565931 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.566079 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.566453 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.570827 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.570954 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2zfnn"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.571738 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6dw8"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.575054 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-g2dcn"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.581605 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.583371 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.585953 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2zfnn" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.586012 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6dw8" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.589469 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.590925 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.607059 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.607322 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.607529 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.607669 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.607990 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.608101 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.608199 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.608362 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.608936 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.609083 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.609274 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.609423 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.609600 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.609722 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.609882 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.609952 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 28 
15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.610038 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.610048 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.610181 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.610288 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.610338 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.610436 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.610572 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.610690 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.610848 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.610930 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.611096 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.611194 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.611596 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.611694 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.611777 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.612823 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.613302 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.613482 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cqgww"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.617133 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nxd8x"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.613879 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.617656 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.613956 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.617700 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cqgww"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.614494 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.617939 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.618138 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.618265 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.618325 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.618368 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.617670 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nxd8x"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.618453 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.618487 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.618590 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.620041 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.624683 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.625762 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gr6td"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.625813 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.626384 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.628483 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gnmz9"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.629008 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-gnmz9"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.629399 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5cr6t"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.629760 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.630234 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5cr6t"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.630442 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-n2td9"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.630957 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.631056 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.631086 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-n2td9"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.631444 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.651256 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.651930 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/feaf053e-d992-479b-b7ac-f7383e0b4b35-client-ca\") pod \"controller-manager-879f6c89f-6q42k\" (UID: \"feaf053e-d992-479b-b7ac-f7383e0b4b35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.652026 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ca41e21f-75c8-48bc-8611-85bebde78fad-audit-policies\") pod \"apiserver-7bbb656c7d-wf8nw\" (UID: \"ca41e21f-75c8-48bc-8611-85bebde78fad\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.652089 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcr6g\" (UniqueName: \"kubernetes.io/projected/c9f781b6-b4dc-428e-a4b5-c0edca799be2-kube-api-access-gcr6g\") pod \"cluster-image-registry-operator-dc59b4c8b-lcqnw\" (UID: \"c9f781b6-b4dc-428e-a4b5-c0edca799be2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lcqnw"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.652148 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwnb8\" (UniqueName: \"kubernetes.io/projected/81d03df1-14b4-4475-944e-bf81e7abca38-kube-api-access-wwnb8\") pod \"authentication-operator-69f744f599-sfkds\" (UID: \"81d03df1-14b4-4475-944e-bf81e7abca38\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sfkds"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.652301 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ea57229-2fa9-47b3-a2f1-6c28d9434923-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.652343 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81d03df1-14b4-4475-944e-bf81e7abca38-serving-cert\") pod \"authentication-operator-69f744f599-sfkds\" (UID: \"81d03df1-14b4-4475-944e-bf81e7abca38\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sfkds"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.652271 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gr6td"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.652393 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.652489 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-service-ca\") pod \"console-f9d7485db-vzxzx\" (UID: \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\") " pod="openshift-console/console-f9d7485db-vzxzx"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.652694 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cc9b874e-9d92-4b60-affa-24d0f2286cb8-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6wtc2\" (UID: \"cc9b874e-9d92-4b60-affa-24d0f2286cb8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6wtc2"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.652738 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvdlk\" (UniqueName: \"kubernetes.io/projected/285eb7ab-eacb-482f-bafb-45871026d2b1-kube-api-access-lvdlk\") pod \"openshift-controller-manager-operator-756b6f6bc6-qvscx\" (UID: \"285eb7ab-eacb-482f-bafb-45871026d2b1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qvscx"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.653057 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fbb0ada-30b2-4b03-bb9a-456f07e78a42-config\") pod \"console-operator-58897d9998-8jcmm\" (UID: \"7fbb0ada-30b2-4b03-bb9a-456f07e78a42\") " pod="openshift-console-operator/console-operator-58897d9998-8jcmm"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.653209 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ea57229-2fa9-47b3-a2f1-6c28d9434923-config\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.653355 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klw2s\" (UniqueName: \"kubernetes.io/projected/e1399bb5-4202-4d0e-aac3-83bec9d52d2d-kube-api-access-klw2s\") pod \"downloads-7954f5f757-z2gjc\" (UID: \"e1399bb5-4202-4d0e-aac3-83bec9d52d2d\") " pod="openshift-console/downloads-7954f5f757-z2gjc"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.653391 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/64a0f7cc-6a3a-4604-a964-6fbd123e4d24-proxy-tls\") pod \"machine-config-controller-84d6567774-kvxbc\" (UID: \"64a0f7cc-6a3a-4604-a964-6fbd123e4d24\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kvxbc"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.653542 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttfxr\" (UniqueName: \"kubernetes.io/projected/5ea57229-2fa9-47b3-a2f1-6c28d9434923-kube-api-access-ttfxr\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.654006 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca41e21f-75c8-48bc-8611-85bebde78fad-serving-cert\") pod \"apiserver-7bbb656c7d-wf8nw\" (UID: \"ca41e21f-75c8-48bc-8611-85bebde78fad\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.654170 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ca41e21f-75c8-48bc-8611-85bebde78fad-audit-dir\") pod \"apiserver-7bbb656c7d-wf8nw\" (UID: \"ca41e21f-75c8-48bc-8611-85bebde78fad\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.654307 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-trusted-ca-bundle\") pod \"console-f9d7485db-vzxzx\" (UID: \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\") " pod="openshift-console/console-f9d7485db-vzxzx"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.654452 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz86w\" (UniqueName: \"kubernetes.io/projected/d4b3623d-53f7-4d3a-b3d7-b55fd0736e9d-kube-api-access-qz86w\") pod \"cluster-samples-operator-665b6dd947-nhwwk\" (UID: \"d4b3623d-53f7-4d3a-b3d7-b55fd0736e9d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhwwk"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.655700 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fbb0ada-30b2-4b03-bb9a-456f07e78a42-serving-cert\") pod \"console-operator-58897d9998-8jcmm\" (UID: \"7fbb0ada-30b2-4b03-bb9a-456f07e78a42\") " pod="openshift-console-operator/console-operator-58897d9998-8jcmm"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.655772 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5ea57229-2fa9-47b3-a2f1-6c28d9434923-audit\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.655831 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-oauth-serving-cert\") pod \"console-f9d7485db-vzxzx\" (UID: \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\") " pod="openshift-console/console-f9d7485db-vzxzx"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.655869 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x7w9\" (UniqueName: \"kubernetes.io/projected/cc9b874e-9d92-4b60-affa-24d0f2286cb8-kube-api-access-4x7w9\") pod \"openshift-config-operator-7777fb866f-6wtc2\" (UID: \"cc9b874e-9d92-4b60-affa-24d0f2286cb8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6wtc2"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.655950 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.655986 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.656029 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7r8x\" (UniqueName: \"kubernetes.io/projected/feaf053e-d992-479b-b7ac-f7383e0b4b35-kube-api-access-k7r8x\") pod \"controller-manager-879f6c89f-6q42k\" (UID: \"feaf053e-d992-479b-b7ac-f7383e0b4b35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.656550 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81d03df1-14b4-4475-944e-bf81e7abca38-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-sfkds\" (UID: \"81d03df1-14b4-4475-944e-bf81e7abca38\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sfkds"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.656597 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3a430c60-e09a-473a-8938-c6e67c6fe89f-tmpfs\") pod \"packageserver-d55dfcdfc-2zfnn\" (UID: \"3a430c60-e09a-473a-8938-c6e67c6fe89f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2zfnn"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.656637 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/64a0f7cc-6a3a-4604-a964-6fbd123e4d24-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-kvxbc\" (UID: \"64a0f7cc-6a3a-4604-a964-6fbd123e4d24\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kvxbc"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.656806 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9f781b6-b4dc-428e-a4b5-c0edca799be2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lcqnw\" (UID: \"c9f781b6-b4dc-428e-a4b5-c0edca799be2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lcqnw"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.656838 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d4b3623d-53f7-4d3a-b3d7-b55fd0736e9d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-nhwwk\" (UID: \"d4b3623d-53f7-4d3a-b3d7-b55fd0736e9d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhwwk"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.656868 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.657413 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqq8s\" (UniqueName: \"kubernetes.io/projected/ca41e21f-75c8-48bc-8611-85bebde78fad-kube-api-access-fqq8s\") pod \"apiserver-7bbb656c7d-wf8nw\" (UID: \"ca41e21f-75c8-48bc-8611-85bebde78fad\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.657459 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ef3c4a5f-725d-4be0-b800-ab95fba9e33e-images\") pod \"machine-api-operator-5694c8668f-8ppbb\" (UID: \"ef3c4a5f-725d-4be0-b800-ab95fba9e33e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8ppbb"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.652486 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.660575 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.664188 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.665290 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.679034 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5ea57229-2fa9-47b3-a2f1-6c28d9434923-image-import-ca\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.679129 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6faa0a-407c-485c-9d10-0ed877cdfe30-config\") pod \"machine-approver-56656f9798-dqcjb\" (UID: \"5b6faa0a-407c-485c-9d10-0ed877cdfe30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqcjb"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.679238 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v6hf\" (UniqueName: \"kubernetes.io/projected/5b6faa0a-407c-485c-9d10-0ed877cdfe30-kube-api-access-9v6hf\") pod \"machine-approver-56656f9798-dqcjb\" (UID: \"5b6faa0a-407c-485c-9d10-0ed877cdfe30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqcjb"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.679275 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc9b874e-9d92-4b60-affa-24d0f2286cb8-serving-cert\") pod \"openshift-config-operator-7777fb866f-6wtc2\" (UID: \"cc9b874e-9d92-4b60-affa-24d0f2286cb8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6wtc2"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.679309 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ea57229-2fa9-47b3-a2f1-6c28d9434923-serving-cert\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.679344 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-console-config\") pod \"console-f9d7485db-vzxzx\" (UID: \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\") " pod="openshift-console/console-f9d7485db-vzxzx"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.679380 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5ea57229-2fa9-47b3-a2f1-6c28d9434923-etcd-serving-ca\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.679404 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5b6faa0a-407c-485c-9d10-0ed877cdfe30-machine-approver-tls\") pod \"machine-approver-56656f9798-dqcjb\" (UID: \"5b6faa0a-407c-485c-9d10-0ed877cdfe30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqcjb"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.679434 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81d03df1-14b4-4475-944e-bf81e7abca38-config\") pod \"authentication-operator-69f744f599-sfkds\" (UID: \"81d03df1-14b4-4475-944e-bf81e7abca38\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sfkds"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.679460 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c58e09f-229a-41a8-814f-d2d919d706f6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r6dw8\" (UID: \"9c58e09f-229a-41a8-814f-d2d919d706f6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6dw8"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.679496 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5ea57229-2fa9-47b3-a2f1-6c28d9434923-etcd-client\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.688201 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ea57229-2fa9-47b3-a2f1-6c28d9434923-audit-dir\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.688232 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.690941 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-79k8x"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.691139 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.691699 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65b08b40-b2e6-4db4-8cb1-14a48a144f3b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-hltf8\" (UID: \"65b08b40-b2e6-4db4-8cb1-14a48a144f3b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hltf8"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.691774 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ea57229-2fa9-47b3-a2f1-6c28d9434923-node-pullsecrets\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.691801 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.691833 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fmtw\" (UniqueName: \"kubernetes.io/projected/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-kube-api-access-5fmtw\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.691854 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5b6faa0a-407c-485c-9d10-0ed877cdfe30-auth-proxy-config\") pod \"machine-approver-56656f9798-dqcjb\" (UID: \"5b6faa0a-407c-485c-9d10-0ed877cdfe30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqcjb"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.692844 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-79k8x"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.693216 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.693257 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7p7z\" (UniqueName: \"kubernetes.io/projected/64a0f7cc-6a3a-4604-a964-6fbd123e4d24-kube-api-access-x7p7z\") pod \"machine-config-controller-84d6567774-kvxbc\" (UID: \"64a0f7cc-6a3a-4604-a964-6fbd123e4d24\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kvxbc"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.693293 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c58e09f-229a-41a8-814f-d2d919d706f6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r6dw8\" (UID: \"9c58e09f-229a-41a8-814f-d2d919d706f6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6dw8"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.693350 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.693424 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rjts\" (UniqueName: \"kubernetes.io/projected/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-kube-api-access-6rjts\") pod \"console-f9d7485db-vzxzx\" (UID: \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\") " pod="openshift-console/console-f9d7485db-vzxzx"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.693458 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clmhk\" (UniqueName: \"kubernetes.io/projected/ef3c4a5f-725d-4be0-b800-ab95fba9e33e-kube-api-access-clmhk\") pod \"machine-api-operator-5694c8668f-8ppbb\" (UID: \"ef3c4a5f-725d-4be0-b800-ab95fba9e33e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8ppbb"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.693486 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.693567 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-console-oauth-config\") pod \"console-f9d7485db-vzxzx\" (UID: \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\") " pod="openshift-console/console-f9d7485db-vzxzx"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.693598 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef3c4a5f-725d-4be0-b800-ab95fba9e33e-config\") pod \"machine-api-operator-5694c8668f-8ppbb\" (UID: \"ef3c4a5f-725d-4be0-b800-ab95fba9e33e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8ppbb"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.693619 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feaf053e-d992-479b-b7ac-f7383e0b4b35-config\") pod \"controller-manager-879f6c89f-6q42k\" (UID: \"feaf053e-d992-479b-b7ac-f7383e0b4b35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.693646 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5a371e6-d5dc-4971-8abf-c193da52013c-client-ca\") pod \"route-controller-manager-6576b87f9c-nj5sn\" (UID: \"b5a371e6-d5dc-4971-8abf-c193da52013c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.693666 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5a371e6-d5dc-4971-8abf-c193da52013c-serving-cert\") pod \"route-controller-manager-6576b87f9c-nj5sn\" (UID: \"b5a371e6-d5dc-4971-8abf-c193da52013c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.693725 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tphb\" (UniqueName: \"kubernetes.io/projected/b5a371e6-d5dc-4971-8abf-c193da52013c-kube-api-access-8tphb\") pod \"route-controller-manager-6576b87f9c-nj5sn\" (UID: \"b5a371e6-d5dc-4971-8abf-c193da52013c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.693749 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.693788 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ca41e21f-75c8-48bc-8611-85bebde78fad-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wf8nw\" (UID: \"ca41e21f-75c8-48bc-8611-85bebde78fad\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.693810 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3a430c60-e09a-473a-8938-c6e67c6fe89f-apiservice-cert\") pod \"packageserver-d55dfcdfc-2zfnn\" (UID: \"3a430c60-e09a-473a-8938-c6e67c6fe89f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2zfnn"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.693834 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj2rl\" (UniqueName: \"kubernetes.io/projected/65b08b40-b2e6-4db4-8cb1-14a48a144f3b-kube-api-access-bj2rl\") pod \"openshift-apiserver-operator-796bbdcf4f-hltf8\" (UID: \"65b08b40-b2e6-4db4-8cb1-14a48a144f3b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hltf8"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.693861 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-console-serving-cert\") pod \"console-f9d7485db-vzxzx\" (UID: \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\") " pod="openshift-console/console-f9d7485db-vzxzx"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.693885 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.693908 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.693932 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5a371e6-d5dc-4971-8abf-c193da52013c-config\") pod \"route-controller-manager-6576b87f9c-nj5sn\" (UID: \"b5a371e6-d5dc-4971-8abf-c193da52013c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.693955 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/feaf053e-d992-479b-b7ac-f7383e0b4b35-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-6q42k\" (UID: \"feaf053e-d992-479b-b7ac-f7383e0b4b35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.693983 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-audit-dir\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.694013 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ca41e21f-75c8-48bc-8611-85bebde78fad-etcd-client\") pod \"apiserver-7bbb656c7d-wf8nw\" (UID: \"ca41e21f-75c8-48bc-8611-85bebde78fad\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.694031 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9f781b6-b4dc-428e-a4b5-c0edca799be2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lcqnw\" (UID: \"c9f781b6-b4dc-428e-a4b5-c0edca799be2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lcqnw"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.694050 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feaf053e-d992-479b-b7ac-f7383e0b4b35-serving-cert\") pod \"controller-manager-879f6c89f-6q42k\" (UID: \"feaf053e-d992-479b-b7ac-f7383e0b4b35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.694073 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c9f781b6-b4dc-428e-a4b5-c0edca799be2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lcqnw\" (UID: \"c9f781b6-b4dc-428e-a4b5-c0edca799be2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lcqnw"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.694142 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ca41e21f-75c8-48bc-8611-85bebde78fad-encryption-config\") pod \"apiserver-7bbb656c7d-wf8nw\" (UID: \"ca41e21f-75c8-48bc-8611-85bebde78fad\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.694167 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z47lw\" (UniqueName: \"kubernetes.io/projected/7fbb0ada-30b2-4b03-bb9a-456f07e78a42-kube-api-access-z47lw\") pod \"console-operator-58897d9998-8jcmm\" (UID: \"7fbb0ada-30b2-4b03-bb9a-456f07e78a42\") " pod="openshift-console-operator/console-operator-58897d9998-8jcmm"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.694225 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca41e21f-75c8-48bc-8611-85bebde78fad-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wf8nw\" (UID: \"ca41e21f-75c8-48bc-8611-85bebde78fad\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.694253 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ef3c4a5f-725d-4be0-b800-ab95fba9e33e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-8ppbb\" (UID: \"ef3c4a5f-725d-4be0-b800-ab95fba9e33e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8ppbb"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.694276 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7fbb0ada-30b2-4b03-bb9a-456f07e78a42-trusted-ca\") pod \"console-operator-58897d9998-8jcmm\" (UID: \"7fbb0ada-30b2-4b03-bb9a-456f07e78a42\") " pod="openshift-console-operator/console-operator-58897d9998-8jcmm"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.694300 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxmrq\" (UniqueName: \"kubernetes.io/projected/3a430c60-e09a-473a-8938-c6e67c6fe89f-kube-api-access-hxmrq\") pod \"packageserver-d55dfcdfc-2zfnn\" (UID: \"3a430c60-e09a-473a-8938-c6e67c6fe89f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2zfnn"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.694355 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3a430c60-e09a-473a-8938-c6e67c6fe89f-webhook-cert\") pod \"packageserver-d55dfcdfc-2zfnn\" (UID: \"3a430c60-e09a-473a-8938-c6e67c6fe89f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2zfnn"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.694382 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-audit-policies\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.694443 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5ea57229-2fa9-47b3-a2f1-6c28d9434923-encryption-config\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.694475 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/285eb7ab-eacb-482f-bafb-45871026d2b1-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-qvscx\" (UID: \"285eb7ab-eacb-482f-bafb-45871026d2b1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qvscx"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.695595 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vh5rz"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.696303 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c58e09f-229a-41a8-814f-d2d919d706f6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r6dw8\" (UID: \"9c58e09f-229a-41a8-814f-d2d919d706f6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6dw8"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.696401 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/285eb7ab-eacb-482f-bafb-45871026d2b1-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-qvscx\" (UID: \"285eb7ab-eacb-482f-bafb-45871026d2b1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qvscx"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.696458 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65b08b40-b2e6-4db4-8cb1-14a48a144f3b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hltf8\" (UID: \"65b08b40-b2e6-4db4-8cb1-14a48a144f3b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hltf8"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.696532 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81d03df1-14b4-4475-944e-bf81e7abca38-service-ca-bundle\") pod \"authentication-operator-69f744f599-sfkds\" (UID: \"81d03df1-14b4-4475-944e-bf81e7abca38\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sfkds"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.697076 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fzfvj"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.697649 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fzfvj"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.697932 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vh5rz"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.698686 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dz8b4"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.699894 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dz8b4"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.703576 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fv7gf"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.704379 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mvn69"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.705090 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mvn69"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.705189 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-fv7gf"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.706761 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bdn99"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.707684 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-xgk22"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.708290 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xgk22"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.708739 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bdn99"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.708745 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493540-2f7hx"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.709687 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.709943 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-2f7hx"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.711295 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9mpt"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.713011 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkj4d"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.713625 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9mpt"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.715807 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jmvhq"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.716764 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jmvhq"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.717050 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4zlj2"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.717106 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkj4d"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.717547 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4zlj2"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.724810 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-vzxzx"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.726626 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.731577 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vd8ml"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.732693 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6wtc2"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.743117 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6q42k"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.743319 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.743890 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lcqnw"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.745128 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8jcmm"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.747899 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6dw8"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.758775 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.760803 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-kvxbc"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.760885 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.771557 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zgw9r"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.771742 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-z2gjc"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.773091 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gnmz9"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.775526 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-79k8x"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.775574 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qvscx"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.778520 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cqgww"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.779335 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-8ppbb"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.780882 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.782047 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-g2dcn"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.783166 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mvn69"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.784101 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9mpt"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.785617 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2zfnn"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.786658 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gr6td"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.787618 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-stl87"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.788312 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-stl87"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.788656 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-n2td9"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.789907 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jmvhq"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.790887 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dz8b4"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.791924 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493540-2f7hx"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.793017 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fzfvj"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.794137 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nxd8x"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.796516 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vh5rz"]
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.797122 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ca41e21f-75c8-48bc-8611-85bebde78fad-etcd-client\") pod \"apiserver-7bbb656c7d-wf8nw\" (UID: \"ca41e21f-75c8-48bc-8611-85bebde78fad\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.797158 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feaf053e-d992-479b-b7ac-f7383e0b4b35-serving-cert\") pod \"controller-manager-879f6c89f-6q42k\" (UID: \"feaf053e-d992-479b-b7ac-f7383e0b4b35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.797181 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c9f781b6-b4dc-428e-a4b5-c0edca799be2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lcqnw\" (UID: \"c9f781b6-b4dc-428e-a4b5-c0edca799be2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lcqnw"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.797200 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3a430c60-e09a-473a-8938-c6e67c6fe89f-webhook-cert\") pod \"packageserver-d55dfcdfc-2zfnn\" (UID: \"3a430c60-e09a-473a-8938-c6e67c6fe89f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2zfnn"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.797226 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c30629f6-a476-415a-9fae-6c70598bd3c3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dz8b4\" (UID: \"c30629f6-a476-415a-9fae-6c70598bd3c3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dz8b4"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.797244 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5ea57229-2fa9-47b3-a2f1-6c28d9434923-encryption-config\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.797293 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81d03df1-14b4-4475-944e-bf81e7abca38-service-ca-bundle\") pod \"authentication-operator-69f744f599-sfkds\" (UID: \"81d03df1-14b4-4475-944e-bf81e7abca38\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sfkds"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.797312 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c58e09f-229a-41a8-814f-d2d919d706f6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r6dw8\" (UID: \"9c58e09f-229a-41a8-814f-d2d919d706f6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6dw8"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.797329 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/285eb7ab-eacb-482f-bafb-45871026d2b1-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-qvscx\" (UID: \"285eb7ab-eacb-482f-bafb-45871026d2b1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qvscx"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.797345 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65b08b40-b2e6-4db4-8cb1-14a48a144f3b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hltf8\" (UID: \"65b08b40-b2e6-4db4-8cb1-14a48a144f3b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hltf8"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.797387 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9a587792-e86e-434f-873e-c7ce3aac8bce-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fzfvj\" (UID: \"9a587792-e86e-434f-873e-c7ce3aac8bce\") " pod="openshift-marketplace/marketplace-operator-79b997595-fzfvj"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.798337 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcr6g\" (UniqueName: \"kubernetes.io/projected/c9f781b6-b4dc-428e-a4b5-c0edca799be2-kube-api-access-gcr6g\") pod \"cluster-image-registry-operator-dc59b4c8b-lcqnw\" (UID: \"c9f781b6-b4dc-428e-a4b5-c0edca799be2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lcqnw"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.800036 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/feaf053e-d992-479b-b7ac-f7383e0b4b35-client-ca\") pod \"controller-manager-879f6c89f-6q42k\" (UID: \"feaf053e-d992-479b-b7ac-f7383e0b4b35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.800078 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81d03df1-14b4-4475-944e-bf81e7abca38-service-ca-bundle\") pod \"authentication-operator-69f744f599-sfkds\" (UID: \"81d03df1-14b4-4475-944e-bf81e7abca38\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sfkds"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.800168 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65b08b40-b2e6-4db4-8cb1-14a48a144f3b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hltf8\" (UID: \"65b08b40-b2e6-4db4-8cb1-14a48a144f3b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hltf8"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.800440 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81d03df1-14b4-4475-944e-bf81e7abca38-serving-cert\") pod \"authentication-operator-69f744f599-sfkds\" (UID: \"81d03df1-14b4-4475-944e-bf81e7abca38\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sfkds"
Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.800514 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm5nz\" (UniqueName: \"kubernetes.io/projected/7ba6c64d-c248-4150-93c7-5acf1fcbadfd-kube-api-access-sm5nz\") pod \"catalog-operator-68c6474976-tkj4d\" (UID: \"7ba6c64d-c248-4150-93c7-5acf1fcbadfd\") "
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkj4d" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.800560 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-service-ca\") pod \"console-f9d7485db-vzxzx\" (UID: \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\") " pod="openshift-console/console-f9d7485db-vzxzx" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.800599 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn7vn\" (UniqueName: \"kubernetes.io/projected/70e61761-82dd-4ac8-a847-1727769f4424-kube-api-access-nn7vn\") pod \"collect-profiles-29493540-2f7hx\" (UID: \"70e61761-82dd-4ac8-a847-1727769f4424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-2f7hx" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.800638 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ea57229-2fa9-47b3-a2f1-6c28d9434923-config\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.800684 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klw2s\" (UniqueName: \"kubernetes.io/projected/e1399bb5-4202-4d0e-aac3-83bec9d52d2d-kube-api-access-klw2s\") pod \"downloads-7954f5f757-z2gjc\" (UID: \"e1399bb5-4202-4d0e-aac3-83bec9d52d2d\") " pod="openshift-console/downloads-7954f5f757-z2gjc" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.800727 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca41e21f-75c8-48bc-8611-85bebde78fad-serving-cert\") pod \"apiserver-7bbb656c7d-wf8nw\" (UID: \"ca41e21f-75c8-48bc-8611-85bebde78fad\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.800775 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fbb0ada-30b2-4b03-bb9a-456f07e78a42-serving-cert\") pod \"console-operator-58897d9998-8jcmm\" (UID: \"7fbb0ada-30b2-4b03-bb9a-456f07e78a42\") " pod="openshift-console-operator/console-operator-58897d9998-8jcmm" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.800811 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttfxr\" (UniqueName: \"kubernetes.io/projected/5ea57229-2fa9-47b3-a2f1-6c28d9434923-kube-api-access-ttfxr\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.800845 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.800882 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.800915 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bc93c92-2229-4d87-919d-d4104cf7bcab-config\") pod \"service-ca-operator-777779d784-cqgww\" (UID: \"1bc93c92-2229-4d87-919d-d4104cf7bcab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cqgww" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.800958 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-oauth-serving-cert\") pod \"console-f9d7485db-vzxzx\" (UID: \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\") " pod="openshift-console/console-f9d7485db-vzxzx" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.800989 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x7w9\" (UniqueName: \"kubernetes.io/projected/cc9b874e-9d92-4b60-affa-24d0f2286cb8-kube-api-access-4x7w9\") pod \"openshift-config-operator-7777fb866f-6wtc2\" (UID: \"cc9b874e-9d92-4b60-affa-24d0f2286cb8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6wtc2" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.801024 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5ea57229-2fa9-47b3-a2f1-6c28d9434923-audit\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.801083 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc8zh\" (UniqueName: \"kubernetes.io/projected/73c142b0-ef25-4567-a816-965a127760af-kube-api-access-jc8zh\") pod \"migrator-59844c95c7-vh5rz\" (UID: \"73c142b0-ef25-4567-a816-965a127760af\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vh5rz" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.801129 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/64a0f7cc-6a3a-4604-a964-6fbd123e4d24-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-kvxbc\" (UID: \"64a0f7cc-6a3a-4604-a964-6fbd123e4d24\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kvxbc" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.801132 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/feaf053e-d992-479b-b7ac-f7383e0b4b35-client-ca\") pod \"controller-manager-879f6c89f-6q42k\" (UID: \"feaf053e-d992-479b-b7ac-f7383e0b4b35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.801159 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81d03df1-14b4-4475-944e-bf81e7abca38-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-sfkds\" (UID: 
\"81d03df1-14b4-4475-944e-bf81e7abca38\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sfkds" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.801208 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d4b3623d-53f7-4d3a-b3d7-b55fd0736e9d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-nhwwk\" (UID: \"d4b3623d-53f7-4d3a-b3d7-b55fd0736e9d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhwwk" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.801259 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6faa0a-407c-485c-9d10-0ed877cdfe30-config\") pod \"machine-approver-56656f9798-dqcjb\" (UID: \"5b6faa0a-407c-485c-9d10-0ed877cdfe30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqcjb" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.801327 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5ea57229-2fa9-47b3-a2f1-6c28d9434923-image-import-ca\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.801407 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.801459 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ea57229-2fa9-47b3-a2f1-6c28d9434923-serving-cert\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.801525 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4v24\" (UniqueName: \"kubernetes.io/projected/ec119cba-64e9-448f-8fa8-da55fd66884f-kube-api-access-r4v24\") pod \"ingress-canary-79k8x\" (UID: \"ec119cba-64e9-448f-8fa8-da55fd66884f\") " pod="openshift-ingress-canary/ingress-canary-79k8x" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.801568 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5ea57229-2fa9-47b3-a2f1-6c28d9434923-etcd-serving-ca\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.801602 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81d03df1-14b4-4475-944e-bf81e7abca38-config\") pod \"authentication-operator-69f744f599-sfkds\" (UID: \"81d03df1-14b4-4475-944e-bf81e7abca38\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sfkds" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.801636 4893 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70e61761-82dd-4ac8-a847-1727769f4424-secret-volume\") pod \"collect-profiles-29493540-2f7hx\" (UID: \"70e61761-82dd-4ac8-a847-1727769f4424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-2f7hx" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.801671 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96a7f787-34e0-4f85-9db1-33722d80495c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nxd8x\" (UID: \"96a7f787-34e0-4f85-9db1-33722d80495c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nxd8x" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.801702 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5ea57229-2fa9-47b3-a2f1-6c28d9434923-etcd-client\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.801729 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ea57229-2fa9-47b3-a2f1-6c28d9434923-node-pullsecrets\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.801762 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.801805 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c4e89991-7235-4188-8c4a-36d2dc3945f5-profile-collector-cert\") pod \"olm-operator-6b444d44fb-r9mpt\" (UID: \"c4e89991-7235-4188-8c4a-36d2dc3945f5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9mpt" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.801837 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ec119cba-64e9-448f-8fa8-da55fd66884f-cert\") pod \"ingress-canary-79k8x\" (UID: \"ec119cba-64e9-448f-8fa8-da55fd66884f\") " pod="openshift-ingress-canary/ingress-canary-79k8x" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.801868 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.801900 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f8hw\" (UniqueName: 
\"kubernetes.io/projected/d7dc96ec-9b60-46a3-b120-5b75ba5e7124-kube-api-access-9f8hw\") pod \"dns-operator-744455d44c-5cr6t\" (UID: \"d7dc96ec-9b60-46a3-b120-5b75ba5e7124\") " pod="openshift-dns-operator/dns-operator-744455d44c-5cr6t" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.801933 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76kvv\" (UniqueName: \"kubernetes.io/projected/912dd730-f999-4811-bf47-485755b7d949-kube-api-access-76kvv\") pod \"router-default-5444994796-xgk22\" (UID: \"912dd730-f999-4811-bf47-485755b7d949\") " pod="openshift-ingress/router-default-5444994796-xgk22" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.801966 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.802002 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a587792-e86e-434f-873e-c7ce3aac8bce-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fzfvj\" (UID: \"9a587792-e86e-434f-873e-c7ce3aac8bce\") " pod="openshift-marketplace/marketplace-operator-79b997595-fzfvj" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.802035 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rjts\" (UniqueName: \"kubernetes.io/projected/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-kube-api-access-6rjts\") pod \"console-f9d7485db-vzxzx\" (UID: \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\") " pod="openshift-console/console-f9d7485db-vzxzx" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.802075 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/03627d33-2baf-4ffe-9af2-ad83eb61dd9c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gr6td\" (UID: \"03627d33-2baf-4ffe-9af2-ad83eb61dd9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gr6td" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.802112 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feaf053e-d992-479b-b7ac-f7383e0b4b35-config\") pod \"controller-manager-879f6c89f-6q42k\" (UID: \"feaf053e-d992-479b-b7ac-f7383e0b4b35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.802144 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj295\" (UniqueName: \"kubernetes.io/projected/c4e89991-7235-4188-8c4a-36d2dc3945f5-kube-api-access-mj295\") pod \"olm-operator-6b444d44fb-r9mpt\" (UID: \"c4e89991-7235-4188-8c4a-36d2dc3945f5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9mpt" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.802170 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/c158949d-4568-4cc2-8e24-8f5f24069664-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-n2td9\" (UID: \"c158949d-4568-4cc2-8e24-8f5f24069664\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n2td9" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.802187 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5ea57229-2fa9-47b3-a2f1-6c28d9434923-audit\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.802233 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-console-oauth-config\") pod \"console-f9d7485db-vzxzx\" (UID: \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\") " pod="openshift-console/console-f9d7485db-vzxzx" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.802277 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5a371e6-d5dc-4971-8abf-c193da52013c-serving-cert\") pod \"route-controller-manager-6576b87f9c-nj5sn\" (UID: \"b5a371e6-d5dc-4971-8abf-c193da52013c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.802311 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.802312 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-service-ca\") pod \"console-f9d7485db-vzxzx\" (UID: \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\") " pod="openshift-console/console-f9d7485db-vzxzx" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.802317 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ca41e21f-75c8-48bc-8611-85bebde78fad-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wf8nw\" (UID: \"ca41e21f-75c8-48bc-8611-85bebde78fad\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.802386 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ea57229-2fa9-47b3-a2f1-6c28d9434923-node-pullsecrets\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.802738 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj2rl\" (UniqueName: \"kubernetes.io/projected/65b08b40-b2e6-4db4-8cb1-14a48a144f3b-kube-api-access-bj2rl\") pod \"openshift-apiserver-operator-796bbdcf4f-hltf8\" (UID: \"65b08b40-b2e6-4db4-8cb1-14a48a144f3b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hltf8" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 
15:03:35.803310 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ca41e21f-75c8-48bc-8611-85bebde78fad-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wf8nw\" (UID: \"ca41e21f-75c8-48bc-8611-85bebde78fad\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.803318 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-oauth-serving-cert\") pod \"console-f9d7485db-vzxzx\" (UID: \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\") " pod="openshift-console/console-f9d7485db-vzxzx" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.804138 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6faa0a-407c-485c-9d10-0ed877cdfe30-config\") pod \"machine-approver-56656f9798-dqcjb\" (UID: \"5b6faa0a-407c-485c-9d10-0ed877cdfe30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqcjb" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.804549 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5ea57229-2fa9-47b3-a2f1-6c28d9434923-etcd-serving-ca\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.806088 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bdn99"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.806491 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5ea57229-2fa9-47b3-a2f1-6c28d9434923-image-import-ca\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.806686 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.807519 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ea57229-2fa9-47b3-a2f1-6c28d9434923-config\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.807812 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.808017 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/feaf053e-d992-479b-b7ac-f7383e0b4b35-serving-cert\") pod \"controller-manager-879f6c89f-6q42k\" (UID: \"feaf053e-d992-479b-b7ac-f7383e0b4b35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.808126 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-console-serving-cert\") pod \"console-f9d7485db-vzxzx\" (UID: \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\") " pod="openshift-console/console-f9d7485db-vzxzx" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.808169 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.808206 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d7dc96ec-9b60-46a3-b120-5b75ba5e7124-metrics-tls\") pod \"dns-operator-744455d44c-5cr6t\" (UID: \"d7dc96ec-9b60-46a3-b120-5b75ba5e7124\") " pod="openshift-dns-operator/dns-operator-744455d44c-5cr6t" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.808241 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9f781b6-b4dc-428e-a4b5-c0edca799be2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lcqnw\" (UID: \"c9f781b6-b4dc-428e-a4b5-c0edca799be2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lcqnw" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.808262 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03627d33-2baf-4ffe-9af2-ad83eb61dd9c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gr6td\" (UID: \"03627d33-2baf-4ffe-9af2-ad83eb61dd9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gr6td" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.808486 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/912dd730-f999-4811-bf47-485755b7d949-metrics-certs\") pod \"router-default-5444994796-xgk22\" (UID: \"912dd730-f999-4811-bf47-485755b7d949\") " pod="openshift-ingress/router-default-5444994796-xgk22" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.808529 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ca41e21f-75c8-48bc-8611-85bebde78fad-encryption-config\") pod \"apiserver-7bbb656c7d-wf8nw\" (UID: \"ca41e21f-75c8-48bc-8611-85bebde78fad\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.808553 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z47lw\" (UniqueName: \"kubernetes.io/projected/7fbb0ada-30b2-4b03-bb9a-456f07e78a42-kube-api-access-z47lw\") pod \"console-operator-58897d9998-8jcmm\" (UID: \"7fbb0ada-30b2-4b03-bb9a-456f07e78a42\") 
" pod="openshift-console-operator/console-operator-58897d9998-8jcmm" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.808576 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxmrq\" (UniqueName: \"kubernetes.io/projected/3a430c60-e09a-473a-8938-c6e67c6fe89f-kube-api-access-hxmrq\") pod \"packageserver-d55dfcdfc-2zfnn\" (UID: \"3a430c60-e09a-473a-8938-c6e67c6fe89f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2zfnn" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.808816 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca41e21f-75c8-48bc-8611-85bebde78fad-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wf8nw\" (UID: \"ca41e21f-75c8-48bc-8611-85bebde78fad\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.808853 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ef3c4a5f-725d-4be0-b800-ab95fba9e33e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-8ppbb\" (UID: \"ef3c4a5f-725d-4be0-b800-ab95fba9e33e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8ppbb" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.808874 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7fbb0ada-30b2-4b03-bb9a-456f07e78a42-trusted-ca\") pod \"console-operator-58897d9998-8jcmm\" (UID: \"7fbb0ada-30b2-4b03-bb9a-456f07e78a42\") " pod="openshift-console-operator/console-operator-58897d9998-8jcmm" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.808897 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1ab80115-9e4f-48a1-8c19-a89f554962cb-signing-cabundle\") pod \"service-ca-9c57cc56f-fv7gf\" (UID: \"1ab80115-9e4f-48a1-8c19-a89f554962cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-fv7gf" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.809524 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feaf053e-d992-479b-b7ac-f7383e0b4b35-config\") pod \"controller-manager-879f6c89f-6q42k\" (UID: \"feaf053e-d992-479b-b7ac-f7383e0b4b35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.810130 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81d03df1-14b4-4475-944e-bf81e7abca38-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-sfkds\" (UID: \"81d03df1-14b4-4475-944e-bf81e7abca38\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sfkds" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.810406 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d4b3623d-53f7-4d3a-b3d7-b55fd0736e9d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-nhwwk\" (UID: \"d4b3623d-53f7-4d3a-b3d7-b55fd0736e9d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhwwk" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.810538 4893 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ca41e21f-75c8-48bc-8611-85bebde78fad-etcd-client\") pod \"apiserver-7bbb656c7d-wf8nw\" (UID: \"ca41e21f-75c8-48bc-8611-85bebde78fad\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.810885 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca41e21f-75c8-48bc-8611-85bebde78fad-serving-cert\") pod \"apiserver-7bbb656c7d-wf8nw\" (UID: \"ca41e21f-75c8-48bc-8611-85bebde78fad\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.811278 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.811528 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.811575 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.811841 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/285eb7ab-eacb-482f-bafb-45871026d2b1-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-qvscx\" (UID: \"285eb7ab-eacb-482f-bafb-45871026d2b1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qvscx" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.811853 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-audit-policies\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.812014 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca41e21f-75c8-48bc-8611-85bebde78fad-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wf8nw\" (UID: \"ca41e21f-75c8-48bc-8611-85bebde78fad\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.811975 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70e61761-82dd-4ac8-a847-1727769f4424-config-volume\") pod \"collect-profiles-29493540-2f7hx\" (UID: \"70e61761-82dd-4ac8-a847-1727769f4424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-2f7hx" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.812086 4893 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96a7f787-34e0-4f85-9db1-33722d80495c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nxd8x\" (UID: \"96a7f787-34e0-4f85-9db1-33722d80495c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nxd8x" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.812148 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/285eb7ab-eacb-482f-bafb-45871026d2b1-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-qvscx\" (UID: \"285eb7ab-eacb-482f-bafb-45871026d2b1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qvscx" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.812335 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2x8x\" (UniqueName: \"kubernetes.io/projected/1ab80115-9e4f-48a1-8c19-a89f554962cb-kube-api-access-x2x8x\") pod \"service-ca-9c57cc56f-fv7gf\" (UID: \"1ab80115-9e4f-48a1-8c19-a89f554962cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-fv7gf" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.812405 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/912dd730-f999-4811-bf47-485755b7d949-stats-auth\") pod \"router-default-5444994796-xgk22\" (UID: \"912dd730-f999-4811-bf47-485755b7d949\") " pod="openshift-ingress/router-default-5444994796-xgk22" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.812443 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ca41e21f-75c8-48bc-8611-85bebde78fad-audit-policies\") pod \"apiserver-7bbb656c7d-wf8nw\" (UID: \"ca41e21f-75c8-48bc-8611-85bebde78fad\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.812479 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/36ea2b14-0c67-4a42-b9cf-cb1a8aabc1a6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bdn99\" (UID: \"36ea2b14-0c67-4a42-b9cf-cb1a8aabc1a6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bdn99" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.812702 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwnb8\" (UniqueName: \"kubernetes.io/projected/81d03df1-14b4-4475-944e-bf81e7abca38-kube-api-access-wwnb8\") pod \"authentication-operator-69f744f599-sfkds\" (UID: \"81d03df1-14b4-4475-944e-bf81e7abca38\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sfkds" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.812864 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ea57229-2fa9-47b3-a2f1-6c28d9434923-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.812997 4893 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cc9b874e-9d92-4b60-affa-24d0f2286cb8-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6wtc2\" (UID: \"cc9b874e-9d92-4b60-affa-24d0f2286cb8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6wtc2" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.813066 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvdlk\" (UniqueName: \"kubernetes.io/projected/285eb7ab-eacb-482f-bafb-45871026d2b1-kube-api-access-lvdlk\") pod \"openshift-controller-manager-operator-756b6f6bc6-qvscx\" (UID: \"285eb7ab-eacb-482f-bafb-45871026d2b1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qvscx" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.813185 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fbb0ada-30b2-4b03-bb9a-456f07e78a42-config\") pod \"console-operator-58897d9998-8jcmm\" (UID: \"7fbb0ada-30b2-4b03-bb9a-456f07e78a42\") " pod="openshift-console-operator/console-operator-58897d9998-8jcmm" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.813823 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/64a0f7cc-6a3a-4604-a964-6fbd123e4d24-proxy-tls\") pod \"machine-config-controller-84d6567774-kvxbc\" (UID: \"64a0f7cc-6a3a-4604-a964-6fbd123e4d24\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kvxbc" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.813902 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1ab80115-9e4f-48a1-8c19-a89f554962cb-signing-key\") pod \"service-ca-9c57cc56f-fv7gf\" (UID: \"1ab80115-9e4f-48a1-8c19-a89f554962cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-fv7gf" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.813935 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25rn8\" (UniqueName: \"kubernetes.io/projected/c158949d-4568-4cc2-8e24-8f5f24069664-kube-api-access-25rn8\") pod \"multus-admission-controller-857f4d67dd-n2td9\" (UID: \"c158949d-4568-4cc2-8e24-8f5f24069664\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n2td9" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.814006 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ca41e21f-75c8-48bc-8611-85bebde78fad-audit-dir\") pod \"apiserver-7bbb656c7d-wf8nw\" (UID: \"ca41e21f-75c8-48bc-8611-85bebde78fad\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.814058 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81d03df1-14b4-4475-944e-bf81e7abca38-config\") pod \"authentication-operator-69f744f599-sfkds\" (UID: \"81d03df1-14b4-4475-944e-bf81e7abca38\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sfkds" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.814390 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/7fbb0ada-30b2-4b03-bb9a-456f07e78a42-trusted-ca\") pod \"console-operator-58897d9998-8jcmm\" (UID: \"7fbb0ada-30b2-4b03-bb9a-456f07e78a42\") " pod="openshift-console-operator/console-operator-58897d9998-8jcmm" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.814432 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5ea57229-2fa9-47b3-a2f1-6c28d9434923-etcd-client\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.814581 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9f781b6-b4dc-428e-a4b5-c0edca799be2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-lcqnw\" (UID: \"c9f781b6-b4dc-428e-a4b5-c0edca799be2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lcqnw" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.814642 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.814900 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ca41e21f-75c8-48bc-8611-85bebde78fad-audit-policies\") pod \"apiserver-7bbb656c7d-wf8nw\" (UID: \"ca41e21f-75c8-48bc-8611-85bebde78fad\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.815337 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-console-serving-cert\") pod \"console-f9d7485db-vzxzx\" (UID: \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\") " pod="openshift-console/console-f9d7485db-vzxzx" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.815444 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-audit-policies\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.815621 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-trusted-ca-bundle\") pod \"console-f9d7485db-vzxzx\" (UID: \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\") " pod="openshift-console/console-f9d7485db-vzxzx" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.815775 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz86w\" (UniqueName: \"kubernetes.io/projected/d4b3623d-53f7-4d3a-b3d7-b55fd0736e9d-kube-api-access-qz86w\") pod \"cluster-samples-operator-665b6dd947-nhwwk\" (UID: \"d4b3623d-53f7-4d3a-b3d7-b55fd0736e9d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhwwk" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 
15:03:35.815991 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ca41e21f-75c8-48bc-8611-85bebde78fad-audit-dir\") pod \"apiserver-7bbb656c7d-wf8nw\" (UID: \"ca41e21f-75c8-48bc-8611-85bebde78fad\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.816096 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/285eb7ab-eacb-482f-bafb-45871026d2b1-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-qvscx\" (UID: \"285eb7ab-eacb-482f-bafb-45871026d2b1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qvscx" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.816838 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cc9b874e-9d92-4b60-affa-24d0f2286cb8-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6wtc2\" (UID: \"cc9b874e-9d92-4b60-affa-24d0f2286cb8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6wtc2" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.816938 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7r8x\" (UniqueName: \"kubernetes.io/projected/feaf053e-d992-479b-b7ac-f7383e0b4b35-kube-api-access-k7r8x\") pod \"controller-manager-879f6c89f-6q42k\" (UID: \"feaf053e-d992-479b-b7ac-f7383e0b4b35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.816984 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmnxp\" (UniqueName: \"kubernetes.io/projected/1bc93c92-2229-4d87-919d-d4104cf7bcab-kube-api-access-qmnxp\") pod \"service-ca-operator-777779d784-cqgww\" (UID: \"1bc93c92-2229-4d87-919d-d4104cf7bcab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cqgww" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.817327 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/64a0f7cc-6a3a-4604-a964-6fbd123e4d24-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-kvxbc\" (UID: \"64a0f7cc-6a3a-4604-a964-6fbd123e4d24\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kvxbc" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.817787 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3a430c60-e09a-473a-8938-c6e67c6fe89f-tmpfs\") pod \"packageserver-d55dfcdfc-2zfnn\" (UID: \"3a430c60-e09a-473a-8938-c6e67c6fe89f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2zfnn" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.817837 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9f781b6-b4dc-428e-a4b5-c0edca799be2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lcqnw\" (UID: \"c9f781b6-b4dc-428e-a4b5-c0edca799be2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lcqnw" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.817873 4893 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-fqq8s\" (UniqueName: \"kubernetes.io/projected/ca41e21f-75c8-48bc-8611-85bebde78fad-kube-api-access-fqq8s\") pod \"apiserver-7bbb656c7d-wf8nw\" (UID: \"ca41e21f-75c8-48bc-8611-85bebde78fad\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.817906 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ef3c4a5f-725d-4be0-b800-ab95fba9e33e-images\") pod \"machine-api-operator-5694c8668f-8ppbb\" (UID: \"ef3c4a5f-725d-4be0-b800-ab95fba9e33e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8ppbb" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.817936 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v6hf\" (UniqueName: \"kubernetes.io/projected/5b6faa0a-407c-485c-9d10-0ed877cdfe30-kube-api-access-9v6hf\") pod \"machine-approver-56656f9798-dqcjb\" (UID: \"5b6faa0a-407c-485c-9d10-0ed877cdfe30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqcjb" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.817991 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ea57229-2fa9-47b3-a2f1-6c28d9434923-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.818038 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5cr6t"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.818072 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bc93c92-2229-4d87-919d-d4104cf7bcab-serving-cert\") pod \"service-ca-operator-777779d784-cqgww\" (UID: \"1bc93c92-2229-4d87-919d-d4104cf7bcab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cqgww" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.818080 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ea57229-2fa9-47b3-a2f1-6c28d9434923-serving-cert\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.818108 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc9b874e-9d92-4b60-affa-24d0f2286cb8-serving-cert\") pod \"openshift-config-operator-7777fb866f-6wtc2\" (UID: \"cc9b874e-9d92-4b60-affa-24d0f2286cb8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6wtc2" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.818226 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhc76\" (UniqueName: \"kubernetes.io/projected/c30629f6-a476-415a-9fae-6c70598bd3c3-kube-api-access-fhc76\") pod \"package-server-manager-789f6589d5-dz8b4\" (UID: \"c30629f6-a476-415a-9fae-6c70598bd3c3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dz8b4" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.818275 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-console-config\") pod \"console-f9d7485db-vzxzx\" (UID: \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\") " pod="openshift-console/console-f9d7485db-vzxzx" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.818429 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5b6faa0a-407c-485c-9d10-0ed877cdfe30-machine-approver-tls\") pod \"machine-approver-56656f9798-dqcjb\" (UID: \"5b6faa0a-407c-485c-9d10-0ed877cdfe30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqcjb" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.818503 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcdwb\" (UniqueName: \"kubernetes.io/projected/9a587792-e86e-434f-873e-c7ce3aac8bce-kube-api-access-zcdwb\") pod \"marketplace-operator-79b997595-fzfvj\" (UID: \"9a587792-e86e-434f-873e-c7ce3aac8bce\") " pod="openshift-marketplace/marketplace-operator-79b997595-fzfvj" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.818639 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tdch\" (UniqueName: \"kubernetes.io/projected/36ea2b14-0c67-4a42-b9cf-cb1a8aabc1a6-kube-api-access-5tdch\") pod \"control-plane-machine-set-operator-78cbb6b69f-bdn99\" (UID: \"36ea2b14-0c67-4a42-b9cf-cb1a8aabc1a6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bdn99" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.818723 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c58e09f-229a-41a8-814f-d2d919d706f6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r6dw8\" (UID: \"9c58e09f-229a-41a8-814f-d2d919d706f6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6dw8" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.818796 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ea57229-2fa9-47b3-a2f1-6c28d9434923-audit-dir\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.818826 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.818913 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-console-config\") pod \"console-f9d7485db-vzxzx\" (UID: \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\") " pod="openshift-console/console-f9d7485db-vzxzx" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.818941 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fmtw\" (UniqueName: \"kubernetes.io/projected/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-kube-api-access-5fmtw\") pod 
\"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.818974 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5b6faa0a-407c-485c-9d10-0ed877cdfe30-auth-proxy-config\") pod \"machine-approver-56656f9798-dqcjb\" (UID: \"5b6faa0a-407c-485c-9d10-0ed877cdfe30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqcjb" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.819018 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65b08b40-b2e6-4db4-8cb1-14a48a144f3b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-hltf8\" (UID: \"65b08b40-b2e6-4db4-8cb1-14a48a144f3b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hltf8" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.819089 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fbb0ada-30b2-4b03-bb9a-456f07e78a42-config\") pod \"console-operator-58897d9998-8jcmm\" (UID: \"7fbb0ada-30b2-4b03-bb9a-456f07e78a42\") " pod="openshift-console-operator/console-operator-58897d9998-8jcmm" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.819096 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7ba6c64d-c248-4150-93c7-5acf1fcbadfd-srv-cert\") pod \"catalog-operator-68c6474976-tkj4d\" (UID: \"7ba6c64d-c248-4150-93c7-5acf1fcbadfd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkj4d" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.819162 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03627d33-2baf-4ffe-9af2-ad83eb61dd9c-config\") pod \"kube-apiserver-operator-766d6c64bb-gr6td\" (UID: \"03627d33-2baf-4ffe-9af2-ad83eb61dd9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gr6td" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.819256 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7p7z\" (UniqueName: \"kubernetes.io/projected/64a0f7cc-6a3a-4604-a964-6fbd123e4d24-kube-api-access-x7p7z\") pod \"machine-config-controller-84d6567774-kvxbc\" (UID: \"64a0f7cc-6a3a-4604-a964-6fbd123e4d24\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kvxbc" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.819290 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7ba6c64d-c248-4150-93c7-5acf1fcbadfd-profile-collector-cert\") pod \"catalog-operator-68c6474976-tkj4d\" (UID: \"7ba6c64d-c248-4150-93c7-5acf1fcbadfd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkj4d" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.819324 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c58e09f-229a-41a8-814f-d2d919d706f6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r6dw8\" (UID: \"9c58e09f-229a-41a8-814f-d2d919d706f6\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6dw8" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.820167 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3a430c60-e09a-473a-8938-c6e67c6fe89f-tmpfs\") pod \"packageserver-d55dfcdfc-2zfnn\" (UID: \"3a430c60-e09a-473a-8938-c6e67c6fe89f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2zfnn" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.820444 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.821977 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ea57229-2fa9-47b3-a2f1-6c28d9434923-audit-dir\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.823214 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c58e09f-229a-41a8-814f-d2d919d706f6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r6dw8\" (UID: \"9c58e09f-229a-41a8-814f-d2d919d706f6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6dw8" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.823824 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-trusted-ca-bundle\") pod \"console-f9d7485db-vzxzx\" (UID: \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\") " pod="openshift-console/console-f9d7485db-vzxzx" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.823861 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5ea57229-2fa9-47b3-a2f1-6c28d9434923-encryption-config\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.823932 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5b6faa0a-407c-485c-9d10-0ed877cdfe30-auth-proxy-config\") pod \"machine-approver-56656f9798-dqcjb\" (UID: \"5b6faa0a-407c-485c-9d10-0ed877cdfe30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqcjb" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.824206 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65b08b40-b2e6-4db4-8cb1-14a48a144f3b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-hltf8\" (UID: \"65b08b40-b2e6-4db4-8cb1-14a48a144f3b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hltf8" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.824229 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-console-oauth-config\") pod \"console-f9d7485db-vzxzx\" (UID: \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\") " pod="openshift-console/console-f9d7485db-vzxzx" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 
15:03:35.824418 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ca41e21f-75c8-48bc-8611-85bebde78fad-encryption-config\") pod \"apiserver-7bbb656c7d-wf8nw\" (UID: \"ca41e21f-75c8-48bc-8611-85bebde78fad\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.825749 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.829083 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc9b874e-9d92-4b60-affa-24d0f2286cb8-serving-cert\") pod \"openshift-config-operator-7777fb866f-6wtc2\" (UID: \"cc9b874e-9d92-4b60-affa-24d0f2286cb8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6wtc2" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.821987 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clmhk\" (UniqueName: \"kubernetes.io/projected/ef3c4a5f-725d-4be0-b800-ab95fba9e33e-kube-api-access-clmhk\") pod \"machine-api-operator-5694c8668f-8ppbb\" (UID: \"ef3c4a5f-725d-4be0-b800-ab95fba9e33e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8ppbb" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.834581 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5b6faa0a-407c-485c-9d10-0ed877cdfe30-machine-approver-tls\") pod \"machine-approver-56656f9798-dqcjb\" (UID: \"5b6faa0a-407c-485c-9d10-0ed877cdfe30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqcjb" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.834916 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ef3c4a5f-725d-4be0-b800-ab95fba9e33e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-8ppbb\" (UID: \"ef3c4a5f-725d-4be0-b800-ab95fba9e33e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8ppbb" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.834992 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.835029 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef3c4a5f-725d-4be0-b800-ab95fba9e33e-config\") pod \"machine-api-operator-5694c8668f-8ppbb\" (UID: \"ef3c4a5f-725d-4be0-b800-ab95fba9e33e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8ppbb" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.835070 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq6sf\" (UniqueName: 
\"kubernetes.io/projected/96a7f787-34e0-4f85-9db1-33722d80495c-kube-api-access-dq6sf\") pod \"kube-storage-version-migrator-operator-b67b599dd-nxd8x\" (UID: \"96a7f787-34e0-4f85-9db1-33722d80495c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nxd8x" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.835075 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ef3c4a5f-725d-4be0-b800-ab95fba9e33e-images\") pod \"machine-api-operator-5694c8668f-8ppbb\" (UID: \"ef3c4a5f-725d-4be0-b800-ab95fba9e33e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8ppbb" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.835103 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5a371e6-d5dc-4971-8abf-c193da52013c-client-ca\") pod \"route-controller-manager-6576b87f9c-nj5sn\" (UID: \"b5a371e6-d5dc-4971-8abf-c193da52013c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.835132 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tphb\" (UniqueName: \"kubernetes.io/projected/b5a371e6-d5dc-4971-8abf-c193da52013c-kube-api-access-8tphb\") pod \"route-controller-manager-6576b87f9c-nj5sn\" (UID: \"b5a371e6-d5dc-4971-8abf-c193da52013c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.836555 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkj4d"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.836887 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c4e89991-7235-4188-8c4a-36d2dc3945f5-srv-cert\") pod \"olm-operator-6b444d44fb-r9mpt\" (UID: \"c4e89991-7235-4188-8c4a-36d2dc3945f5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9mpt" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.836925 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/912dd730-f999-4811-bf47-485755b7d949-default-certificate\") pod \"router-default-5444994796-xgk22\" (UID: \"912dd730-f999-4811-bf47-485755b7d949\") " pod="openshift-ingress/router-default-5444994796-xgk22" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.836960 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3a430c60-e09a-473a-8938-c6e67c6fe89f-apiservice-cert\") pod \"packageserver-d55dfcdfc-2zfnn\" (UID: \"3a430c60-e09a-473a-8938-c6e67c6fe89f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2zfnn" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.836968 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.837059 4893 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9f781b6-b4dc-428e-a4b5-c0edca799be2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-lcqnw\" (UID: \"c9f781b6-b4dc-428e-a4b5-c0edca799be2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lcqnw" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.837237 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.837279 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.837310 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5a371e6-d5dc-4971-8abf-c193da52013c-config\") pod \"route-controller-manager-6576b87f9c-nj5sn\" (UID: \"b5a371e6-d5dc-4971-8abf-c193da52013c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.837322 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-d5fwk"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.837962 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5a371e6-d5dc-4971-8abf-c193da52013c-client-ca\") pod \"route-controller-manager-6576b87f9c-nj5sn\" (UID: \"b5a371e6-d5dc-4971-8abf-c193da52013c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.837989 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5a371e6-d5dc-4971-8abf-c193da52013c-serving-cert\") pod \"route-controller-manager-6576b87f9c-nj5sn\" (UID: \"b5a371e6-d5dc-4971-8abf-c193da52013c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.838397 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-d5fwk" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.838434 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/feaf053e-d992-479b-b7ac-f7383e0b4b35-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-6q42k\" (UID: \"feaf053e-d992-479b-b7ac-f7383e0b4b35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.837331 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/feaf053e-d992-479b-b7ac-f7383e0b4b35-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-6q42k\" (UID: \"feaf053e-d992-479b-b7ac-f7383e0b4b35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.838880 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c58e09f-229a-41a8-814f-d2d919d706f6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r6dw8\" (UID: \"9c58e09f-229a-41a8-814f-d2d919d706f6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6dw8" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.839046 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.839100 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81d03df1-14b4-4475-944e-bf81e7abca38-serving-cert\") pod \"authentication-operator-69f744f599-sfkds\" (UID: \"81d03df1-14b4-4475-944e-bf81e7abca38\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sfkds" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.839309 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/912dd730-f999-4811-bf47-485755b7d949-service-ca-bundle\") pod \"router-default-5444994796-xgk22\" (UID: \"912dd730-f999-4811-bf47-485755b7d949\") " pod="openshift-ingress/router-default-5444994796-xgk22" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.839448 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef3c4a5f-725d-4be0-b800-ab95fba9e33e-config\") pod \"machine-api-operator-5694c8668f-8ppbb\" (UID: \"ef3c4a5f-725d-4be0-b800-ab95fba9e33e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8ppbb" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.839689 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-audit-dir\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.839739 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fbb0ada-30b2-4b03-bb9a-456f07e78a42-serving-cert\") pod \"console-operator-58897d9998-8jcmm\" (UID: \"7fbb0ada-30b2-4b03-bb9a-456f07e78a42\") " pod="openshift-console-operator/console-operator-58897d9998-8jcmm" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.839862 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5a371e6-d5dc-4971-8abf-c193da52013c-config\") pod \"route-controller-manager-6576b87f9c-nj5sn\" (UID: \"b5a371e6-d5dc-4971-8abf-c193da52013c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.839889 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-audit-dir\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.840021 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/64a0f7cc-6a3a-4604-a964-6fbd123e4d24-proxy-tls\") pod \"machine-config-controller-84d6567774-kvxbc\" (UID: \"64a0f7cc-6a3a-4604-a964-6fbd123e4d24\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kvxbc" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.840156 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.840560 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.841224 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3a430c60-e09a-473a-8938-c6e67c6fe89f-apiservice-cert\") pod \"packageserver-d55dfcdfc-2zfnn\" (UID: \"3a430c60-e09a-473a-8938-c6e67c6fe89f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2zfnn" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.841612 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-n2xxg"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.842174 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.842869 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3a430c60-e09a-473a-8938-c6e67c6fe89f-webhook-cert\") pod \"packageserver-d55dfcdfc-2zfnn\" (UID: \"3a430c60-e09a-473a-8938-c6e67c6fe89f\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2zfnn" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.843206 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fv7gf"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.843297 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-n2xxg" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.844577 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-d5fwk"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.845299 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-n2xxg"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.846824 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4zlj2"] Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.860831 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.880083 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.900081 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.921190 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.941073 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1ab80115-9e4f-48a1-8c19-a89f554962cb-signing-key\") pod \"service-ca-9c57cc56f-fv7gf\" (UID: \"1ab80115-9e4f-48a1-8c19-a89f554962cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-fv7gf" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.941134 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25rn8\" (UniqueName: \"kubernetes.io/projected/c158949d-4568-4cc2-8e24-8f5f24069664-kube-api-access-25rn8\") pod \"multus-admission-controller-857f4d67dd-n2td9\" (UID: \"c158949d-4568-4cc2-8e24-8f5f24069664\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n2td9" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.941187 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmnxp\" (UniqueName: \"kubernetes.io/projected/1bc93c92-2229-4d87-919d-d4104cf7bcab-kube-api-access-qmnxp\") pod \"service-ca-operator-777779d784-cqgww\" (UID: \"1bc93c92-2229-4d87-919d-d4104cf7bcab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cqgww" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.941425 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.941616 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bc93c92-2229-4d87-919d-d4104cf7bcab-serving-cert\") pod \"service-ca-operator-777779d784-cqgww\" (UID: \"1bc93c92-2229-4d87-919d-d4104cf7bcab\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-cqgww" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.941695 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhc76\" (UniqueName: \"kubernetes.io/projected/c30629f6-a476-415a-9fae-6c70598bd3c3-kube-api-access-fhc76\") pod \"package-server-manager-789f6589d5-dz8b4\" (UID: \"c30629f6-a476-415a-9fae-6c70598bd3c3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dz8b4" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.941740 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcdwb\" (UniqueName: \"kubernetes.io/projected/9a587792-e86e-434f-873e-c7ce3aac8bce-kube-api-access-zcdwb\") pod \"marketplace-operator-79b997595-fzfvj\" (UID: \"9a587792-e86e-434f-873e-c7ce3aac8bce\") " pod="openshift-marketplace/marketplace-operator-79b997595-fzfvj" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.941774 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tdch\" (UniqueName: \"kubernetes.io/projected/36ea2b14-0c67-4a42-b9cf-cb1a8aabc1a6-kube-api-access-5tdch\") pod \"control-plane-machine-set-operator-78cbb6b69f-bdn99\" (UID: \"36ea2b14-0c67-4a42-b9cf-cb1a8aabc1a6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bdn99" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.941846 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7ba6c64d-c248-4150-93c7-5acf1fcbadfd-srv-cert\") pod \"catalog-operator-68c6474976-tkj4d\" (UID: \"7ba6c64d-c248-4150-93c7-5acf1fcbadfd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkj4d" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.941879 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03627d33-2baf-4ffe-9af2-ad83eb61dd9c-config\") pod \"kube-apiserver-operator-766d6c64bb-gr6td\" (UID: \"03627d33-2baf-4ffe-9af2-ad83eb61dd9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gr6td" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.941928 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7ba6c64d-c248-4150-93c7-5acf1fcbadfd-profile-collector-cert\") pod \"catalog-operator-68c6474976-tkj4d\" (UID: \"7ba6c64d-c248-4150-93c7-5acf1fcbadfd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkj4d" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.941969 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq6sf\" (UniqueName: \"kubernetes.io/projected/96a7f787-34e0-4f85-9db1-33722d80495c-kube-api-access-dq6sf\") pod \"kube-storage-version-migrator-operator-b67b599dd-nxd8x\" (UID: \"96a7f787-34e0-4f85-9db1-33722d80495c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nxd8x" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.942010 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c4e89991-7235-4188-8c4a-36d2dc3945f5-srv-cert\") pod \"olm-operator-6b444d44fb-r9mpt\" (UID: \"c4e89991-7235-4188-8c4a-36d2dc3945f5\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9mpt" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.942034 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/912dd730-f999-4811-bf47-485755b7d949-default-certificate\") pod \"router-default-5444994796-xgk22\" (UID: \"912dd730-f999-4811-bf47-485755b7d949\") " pod="openshift-ingress/router-default-5444994796-xgk22" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.942068 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/912dd730-f999-4811-bf47-485755b7d949-service-ca-bundle\") pod \"router-default-5444994796-xgk22\" (UID: \"912dd730-f999-4811-bf47-485755b7d949\") " pod="openshift-ingress/router-default-5444994796-xgk22" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.942110 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c30629f6-a476-415a-9fae-6c70598bd3c3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dz8b4\" (UID: \"c30629f6-a476-415a-9fae-6c70598bd3c3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dz8b4" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.942192 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9a587792-e86e-434f-873e-c7ce3aac8bce-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fzfvj\" (UID: \"9a587792-e86e-434f-873e-c7ce3aac8bce\") " pod="openshift-marketplace/marketplace-operator-79b997595-fzfvj" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.942233 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm5nz\" (UniqueName: \"kubernetes.io/projected/7ba6c64d-c248-4150-93c7-5acf1fcbadfd-kube-api-access-sm5nz\") pod \"catalog-operator-68c6474976-tkj4d\" (UID: \"7ba6c64d-c248-4150-93c7-5acf1fcbadfd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkj4d" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.942288 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn7vn\" (UniqueName: \"kubernetes.io/projected/70e61761-82dd-4ac8-a847-1727769f4424-kube-api-access-nn7vn\") pod \"collect-profiles-29493540-2f7hx\" (UID: \"70e61761-82dd-4ac8-a847-1727769f4424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-2f7hx" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.942347 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bc93c92-2229-4d87-919d-d4104cf7bcab-config\") pod \"service-ca-operator-777779d784-cqgww\" (UID: \"1bc93c92-2229-4d87-919d-d4104cf7bcab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cqgww" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.942395 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc8zh\" (UniqueName: \"kubernetes.io/projected/73c142b0-ef25-4567-a816-965a127760af-kube-api-access-jc8zh\") pod \"migrator-59844c95c7-vh5rz\" (UID: \"73c142b0-ef25-4567-a816-965a127760af\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vh5rz" Jan 28 15:03:35 crc 
kubenswrapper[4893]: I0128 15:03:35.942436 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70e61761-82dd-4ac8-a847-1727769f4424-secret-volume\") pod \"collect-profiles-29493540-2f7hx\" (UID: \"70e61761-82dd-4ac8-a847-1727769f4424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-2f7hx" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.942483 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4v24\" (UniqueName: \"kubernetes.io/projected/ec119cba-64e9-448f-8fa8-da55fd66884f-kube-api-access-r4v24\") pod \"ingress-canary-79k8x\" (UID: \"ec119cba-64e9-448f-8fa8-da55fd66884f\") " pod="openshift-ingress-canary/ingress-canary-79k8x" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.942537 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96a7f787-34e0-4f85-9db1-33722d80495c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nxd8x\" (UID: \"96a7f787-34e0-4f85-9db1-33722d80495c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nxd8x" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.942571 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c4e89991-7235-4188-8c4a-36d2dc3945f5-profile-collector-cert\") pod \"olm-operator-6b444d44fb-r9mpt\" (UID: \"c4e89991-7235-4188-8c4a-36d2dc3945f5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9mpt" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.942605 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f8hw\" (UniqueName: \"kubernetes.io/projected/d7dc96ec-9b60-46a3-b120-5b75ba5e7124-kube-api-access-9f8hw\") pod \"dns-operator-744455d44c-5cr6t\" (UID: \"d7dc96ec-9b60-46a3-b120-5b75ba5e7124\") " pod="openshift-dns-operator/dns-operator-744455d44c-5cr6t" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.942642 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76kvv\" (UniqueName: \"kubernetes.io/projected/912dd730-f999-4811-bf47-485755b7d949-kube-api-access-76kvv\") pod \"router-default-5444994796-xgk22\" (UID: \"912dd730-f999-4811-bf47-485755b7d949\") " pod="openshift-ingress/router-default-5444994796-xgk22" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.942671 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ec119cba-64e9-448f-8fa8-da55fd66884f-cert\") pod \"ingress-canary-79k8x\" (UID: \"ec119cba-64e9-448f-8fa8-da55fd66884f\") " pod="openshift-ingress-canary/ingress-canary-79k8x" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.942701 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a587792-e86e-434f-873e-c7ce3aac8bce-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fzfvj\" (UID: \"9a587792-e86e-434f-873e-c7ce3aac8bce\") " pod="openshift-marketplace/marketplace-operator-79b997595-fzfvj" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.942741 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/03627d33-2baf-4ffe-9af2-ad83eb61dd9c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gr6td\" (UID: \"03627d33-2baf-4ffe-9af2-ad83eb61dd9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gr6td" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.942804 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj295\" (UniqueName: \"kubernetes.io/projected/c4e89991-7235-4188-8c4a-36d2dc3945f5-kube-api-access-mj295\") pod \"olm-operator-6b444d44fb-r9mpt\" (UID: \"c4e89991-7235-4188-8c4a-36d2dc3945f5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9mpt" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.942944 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c158949d-4568-4cc2-8e24-8f5f24069664-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-n2td9\" (UID: \"c158949d-4568-4cc2-8e24-8f5f24069664\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n2td9" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.943085 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d7dc96ec-9b60-46a3-b120-5b75ba5e7124-metrics-tls\") pod \"dns-operator-744455d44c-5cr6t\" (UID: \"d7dc96ec-9b60-46a3-b120-5b75ba5e7124\") " pod="openshift-dns-operator/dns-operator-744455d44c-5cr6t" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.943120 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03627d33-2baf-4ffe-9af2-ad83eb61dd9c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gr6td\" (UID: \"03627d33-2baf-4ffe-9af2-ad83eb61dd9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gr6td" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.943152 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/912dd730-f999-4811-bf47-485755b7d949-metrics-certs\") pod \"router-default-5444994796-xgk22\" (UID: \"912dd730-f999-4811-bf47-485755b7d949\") " pod="openshift-ingress/router-default-5444994796-xgk22" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.943200 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70e61761-82dd-4ac8-a847-1727769f4424-config-volume\") pod \"collect-profiles-29493540-2f7hx\" (UID: \"70e61761-82dd-4ac8-a847-1727769f4424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-2f7hx" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.943227 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96a7f787-34e0-4f85-9db1-33722d80495c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nxd8x\" (UID: \"96a7f787-34e0-4f85-9db1-33722d80495c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nxd8x" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.943254 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1ab80115-9e4f-48a1-8c19-a89f554962cb-signing-cabundle\") pod \"service-ca-9c57cc56f-fv7gf\" (UID: 
\"1ab80115-9e4f-48a1-8c19-a89f554962cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-fv7gf" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.943349 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2x8x\" (UniqueName: \"kubernetes.io/projected/1ab80115-9e4f-48a1-8c19-a89f554962cb-kube-api-access-x2x8x\") pod \"service-ca-9c57cc56f-fv7gf\" (UID: \"1ab80115-9e4f-48a1-8c19-a89f554962cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-fv7gf" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.943436 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/912dd730-f999-4811-bf47-485755b7d949-stats-auth\") pod \"router-default-5444994796-xgk22\" (UID: \"912dd730-f999-4811-bf47-485755b7d949\") " pod="openshift-ingress/router-default-5444994796-xgk22" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.943482 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/36ea2b14-0c67-4a42-b9cf-cb1a8aabc1a6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bdn99\" (UID: \"36ea2b14-0c67-4a42-b9cf-cb1a8aabc1a6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bdn99" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.961223 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.971580 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bc93c92-2229-4d87-919d-d4104cf7bcab-serving-cert\") pod \"service-ca-operator-777779d784-cqgww\" (UID: \"1bc93c92-2229-4d87-919d-d4104cf7bcab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cqgww" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.980561 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 28 15:03:35 crc kubenswrapper[4893]: I0128 15:03:35.983455 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bc93c92-2229-4d87-919d-d4104cf7bcab-config\") pod \"service-ca-operator-777779d784-cqgww\" (UID: \"1bc93c92-2229-4d87-919d-d4104cf7bcab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cqgww" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.001114 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.020162 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.025173 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96a7f787-34e0-4f85-9db1-33722d80495c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nxd8x\" (UID: \"96a7f787-34e0-4f85-9db1-33722d80495c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nxd8x" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.040971 4893 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.045452 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96a7f787-34e0-4f85-9db1-33722d80495c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nxd8x\" (UID: \"96a7f787-34e0-4f85-9db1-33722d80495c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nxd8x" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.062994 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.080900 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.101862 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.122317 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.141292 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.161212 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.181442 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.200530 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.221475 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.240058 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.260310 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.280724 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.301801 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.320763 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.326554 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d7dc96ec-9b60-46a3-b120-5b75ba5e7124-metrics-tls\") pod \"dns-operator-744455d44c-5cr6t\" (UID: \"d7dc96ec-9b60-46a3-b120-5b75ba5e7124\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-5cr6t" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.341394 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.361251 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.366122 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c158949d-4568-4cc2-8e24-8f5f24069664-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-n2td9\" (UID: \"c158949d-4568-4cc2-8e24-8f5f24069664\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n2td9" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.381865 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.401379 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.420795 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.428489 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03627d33-2baf-4ffe-9af2-ad83eb61dd9c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gr6td\" (UID: \"03627d33-2baf-4ffe-9af2-ad83eb61dd9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gr6td" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.441105 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.461380 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.463087 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03627d33-2baf-4ffe-9af2-ad83eb61dd9c-config\") pod \"kube-apiserver-operator-766d6c64bb-gr6td\" (UID: \"03627d33-2baf-4ffe-9af2-ad83eb61dd9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gr6td" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.500886 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.520181 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.540825 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.545939 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ec119cba-64e9-448f-8fa8-da55fd66884f-cert\") pod \"ingress-canary-79k8x\" (UID: \"ec119cba-64e9-448f-8fa8-da55fd66884f\") " 
pod="openshift-ingress-canary/ingress-canary-79k8x" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.564703 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.581231 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.600354 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.621093 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.641806 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.646646 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9a587792-e86e-434f-873e-c7ce3aac8bce-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fzfvj\" (UID: \"9a587792-e86e-434f-873e-c7ce3aac8bce\") " pod="openshift-marketplace/marketplace-operator-79b997595-fzfvj" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.670030 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.674875 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a587792-e86e-434f-873e-c7ce3aac8bce-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fzfvj\" (UID: \"9a587792-e86e-434f-873e-c7ce3aac8bce\") " pod="openshift-marketplace/marketplace-operator-79b997595-fzfvj" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.681390 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.699181 4893 request.go:700] Waited for 1.000477721s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.701219 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.721642 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.741615 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.745381 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c30629f6-a476-415a-9fae-6c70598bd3c3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dz8b4\" (UID: 
\"c30629f6-a476-415a-9fae-6c70598bd3c3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dz8b4" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.761435 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.780755 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.800536 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.820944 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.841125 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.861463 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.875184 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1ab80115-9e4f-48a1-8c19-a89f554962cb-signing-key\") pod \"service-ca-9c57cc56f-fv7gf\" (UID: \"1ab80115-9e4f-48a1-8c19-a89f554962cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-fv7gf" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.881411 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.901297 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.905822 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1ab80115-9e4f-48a1-8c19-a89f554962cb-signing-cabundle\") pod \"service-ca-9c57cc56f-fv7gf\" (UID: \"1ab80115-9e4f-48a1-8c19-a89f554962cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-fv7gf" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.921265 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 28 15:03:36 crc kubenswrapper[4893]: E0128 15:03:36.942158 4893 secret.go:188] Couldn't get secret openshift-ingress/router-certs-default: failed to sync secret cache: timed out waiting for the condition Jan 28 15:03:36 crc kubenswrapper[4893]: E0128 15:03:36.942174 4893 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 28 15:03:36 crc kubenswrapper[4893]: E0128 15:03:36.942228 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4e89991-7235-4188-8c4a-36d2dc3945f5-srv-cert podName:c4e89991-7235-4188-8c4a-36d2dc3945f5 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:37.442206181 +0000 UTC m=+135.215821209 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c4e89991-7235-4188-8c4a-36d2dc3945f5-srv-cert") pod "olm-operator-6b444d44fb-r9mpt" (UID: "c4e89991-7235-4188-8c4a-36d2dc3945f5") : failed to sync secret cache: timed out waiting for the condition Jan 28 15:03:36 crc kubenswrapper[4893]: E0128 15:03:36.942244 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/912dd730-f999-4811-bf47-485755b7d949-default-certificate podName:912dd730-f999-4811-bf47-485755b7d949 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:37.442237601 +0000 UTC m=+135.215852629 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-certificate" (UniqueName: "kubernetes.io/secret/912dd730-f999-4811-bf47-485755b7d949-default-certificate") pod "router-default-5444994796-xgk22" (UID: "912dd730-f999-4811-bf47-485755b7d949") : failed to sync secret cache: timed out waiting for the condition Jan 28 15:03:36 crc kubenswrapper[4893]: E0128 15:03:36.942158 4893 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Jan 28 15:03:36 crc kubenswrapper[4893]: E0128 15:03:36.942275 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ba6c64d-c248-4150-93c7-5acf1fcbadfd-profile-collector-cert podName:7ba6c64d-c248-4150-93c7-5acf1fcbadfd nodeName:}" failed. No retries permitted until 2026-01-28 15:03:37.442270702 +0000 UTC m=+135.215885730 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/7ba6c64d-c248-4150-93c7-5acf1fcbadfd-profile-collector-cert") pod "catalog-operator-68c6474976-tkj4d" (UID: "7ba6c64d-c248-4150-93c7-5acf1fcbadfd") : failed to sync secret cache: timed out waiting for the condition Jan 28 15:03:36 crc kubenswrapper[4893]: E0128 15:03:36.942296 4893 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 28 15:03:36 crc kubenswrapper[4893]: E0128 15:03:36.942317 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ba6c64d-c248-4150-93c7-5acf1fcbadfd-srv-cert podName:7ba6c64d-c248-4150-93c7-5acf1fcbadfd nodeName:}" failed. No retries permitted until 2026-01-28 15:03:37.442312533 +0000 UTC m=+135.215927561 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/7ba6c64d-c248-4150-93c7-5acf1fcbadfd-srv-cert") pod "catalog-operator-68c6474976-tkj4d" (UID: "7ba6c64d-c248-4150-93c7-5acf1fcbadfd") : failed to sync secret cache: timed out waiting for the condition Jan 28 15:03:36 crc kubenswrapper[4893]: E0128 15:03:36.942337 4893 configmap.go:193] Couldn't get configMap openshift-ingress/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 28 15:03:36 crc kubenswrapper[4893]: E0128 15:03:36.942358 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/912dd730-f999-4811-bf47-485755b7d949-service-ca-bundle podName:912dd730-f999-4811-bf47-485755b7d949 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:37.442352105 +0000 UTC m=+135.215967133 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/912dd730-f999-4811-bf47-485755b7d949-service-ca-bundle") pod "router-default-5444994796-xgk22" (UID: "912dd730-f999-4811-bf47-485755b7d949") : failed to sync configmap cache: timed out waiting for the condition Jan 28 15:03:36 crc kubenswrapper[4893]: E0128 15:03:36.943557 4893 secret.go:188] Couldn't get secret openshift-ingress/router-metrics-certs-default: failed to sync secret cache: timed out waiting for the condition Jan 28 15:03:36 crc kubenswrapper[4893]: E0128 15:03:36.943575 4893 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Jan 28 15:03:36 crc kubenswrapper[4893]: E0128 15:03:36.943591 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/912dd730-f999-4811-bf47-485755b7d949-metrics-certs podName:912dd730-f999-4811-bf47-485755b7d949 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:37.443584318 +0000 UTC m=+135.217199346 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/912dd730-f999-4811-bf47-485755b7d949-metrics-certs") pod "router-default-5444994796-xgk22" (UID: "912dd730-f999-4811-bf47-485755b7d949") : failed to sync secret cache: timed out waiting for the condition Jan 28 15:03:36 crc kubenswrapper[4893]: E0128 15:03:36.943607 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4e89991-7235-4188-8c4a-36d2dc3945f5-profile-collector-cert podName:c4e89991-7235-4188-8c4a-36d2dc3945f5 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:37.443597668 +0000 UTC m=+135.217212696 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c4e89991-7235-4188-8c4a-36d2dc3945f5-profile-collector-cert") pod "olm-operator-6b444d44fb-r9mpt" (UID: "c4e89991-7235-4188-8c4a-36d2dc3945f5") : failed to sync secret cache: timed out waiting for the condition Jan 28 15:03:36 crc kubenswrapper[4893]: E0128 15:03:36.943624 4893 secret.go:188] Couldn't get secret openshift-ingress/router-stats-default: failed to sync secret cache: timed out waiting for the condition Jan 28 15:03:36 crc kubenswrapper[4893]: E0128 15:03:36.943630 4893 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Jan 28 15:03:36 crc kubenswrapper[4893]: E0128 15:03:36.943649 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/912dd730-f999-4811-bf47-485755b7d949-stats-auth podName:912dd730-f999-4811-bf47-485755b7d949 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:37.443641659 +0000 UTC m=+135.217256687 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "stats-auth" (UniqueName: "kubernetes.io/secret/912dd730-f999-4811-bf47-485755b7d949-stats-auth") pod "router-default-5444994796-xgk22" (UID: "912dd730-f999-4811-bf47-485755b7d949") : failed to sync secret cache: timed out waiting for the condition Jan 28 15:03:36 crc kubenswrapper[4893]: E0128 15:03:36.943659 4893 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Jan 28 15:03:36 crc kubenswrapper[4893]: E0128 15:03:36.943659 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/70e61761-82dd-4ac8-a847-1727769f4424-config-volume podName:70e61761-82dd-4ac8-a847-1727769f4424 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:37.44365448 +0000 UTC m=+135.217269508 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/70e61761-82dd-4ac8-a847-1727769f4424-config-volume") pod "collect-profiles-29493540-2f7hx" (UID: "70e61761-82dd-4ac8-a847-1727769f4424") : failed to sync configmap cache: timed out waiting for the condition Jan 28 15:03:36 crc kubenswrapper[4893]: E0128 15:03:36.943685 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70e61761-82dd-4ac8-a847-1727769f4424-secret-volume podName:70e61761-82dd-4ac8-a847-1727769f4424 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:37.44367913 +0000 UTC m=+135.217294158 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-volume" (UniqueName: "kubernetes.io/secret/70e61761-82dd-4ac8-a847-1727769f4424-secret-volume") pod "collect-profiles-29493540-2f7hx" (UID: "70e61761-82dd-4ac8-a847-1727769f4424") : failed to sync secret cache: timed out waiting for the condition Jan 28 15:03:36 crc kubenswrapper[4893]: E0128 15:03:36.944789 4893 secret.go:188] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 28 15:03:36 crc kubenswrapper[4893]: E0128 15:03:36.944829 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36ea2b14-0c67-4a42-b9cf-cb1a8aabc1a6-control-plane-machine-set-operator-tls podName:36ea2b14-0c67-4a42-b9cf-cb1a8aabc1a6 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:37.444821992 +0000 UTC m=+135.218437020 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/36ea2b14-0c67-4a42-b9cf-cb1a8aabc1a6-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-78cbb6b69f-bdn99" (UID: "36ea2b14-0c67-4a42-b9cf-cb1a8aabc1a6") : failed to sync secret cache: timed out waiting for the condition Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.971042 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.971175 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 28 15:03:36 crc kubenswrapper[4893]: I0128 15:03:36.980334 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.000939 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.021200 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.041900 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.061570 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.081188 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.100744 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.121078 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.141451 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.162828 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.201078 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.221307 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.240835 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.267281 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.281199 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 28 15:03:37 crc kubenswrapper[4893]: 
I0128 15:03:37.300578 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.320890 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.341016 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.360748 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.380670 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.401080 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.421034 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.440315 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.466595 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7ba6c64d-c248-4150-93c7-5acf1fcbadfd-srv-cert\") pod \"catalog-operator-68c6474976-tkj4d\" (UID: \"7ba6c64d-c248-4150-93c7-5acf1fcbadfd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkj4d" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.466668 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7ba6c64d-c248-4150-93c7-5acf1fcbadfd-profile-collector-cert\") pod \"catalog-operator-68c6474976-tkj4d\" (UID: \"7ba6c64d-c248-4150-93c7-5acf1fcbadfd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkj4d" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.466723 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c4e89991-7235-4188-8c4a-36d2dc3945f5-srv-cert\") pod \"olm-operator-6b444d44fb-r9mpt\" (UID: \"c4e89991-7235-4188-8c4a-36d2dc3945f5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9mpt" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.466748 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/912dd730-f999-4811-bf47-485755b7d949-default-certificate\") pod \"router-default-5444994796-xgk22\" (UID: \"912dd730-f999-4811-bf47-485755b7d949\") " pod="openshift-ingress/router-default-5444994796-xgk22" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.466776 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/912dd730-f999-4811-bf47-485755b7d949-service-ca-bundle\") pod 
\"router-default-5444994796-xgk22\" (UID: \"912dd730-f999-4811-bf47-485755b7d949\") " pod="openshift-ingress/router-default-5444994796-xgk22" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.466913 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70e61761-82dd-4ac8-a847-1727769f4424-secret-volume\") pod \"collect-profiles-29493540-2f7hx\" (UID: \"70e61761-82dd-4ac8-a847-1727769f4424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-2f7hx" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.466952 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c4e89991-7235-4188-8c4a-36d2dc3945f5-profile-collector-cert\") pod \"olm-operator-6b444d44fb-r9mpt\" (UID: \"c4e89991-7235-4188-8c4a-36d2dc3945f5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9mpt" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.467040 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/912dd730-f999-4811-bf47-485755b7d949-metrics-certs\") pod \"router-default-5444994796-xgk22\" (UID: \"912dd730-f999-4811-bf47-485755b7d949\") " pod="openshift-ingress/router-default-5444994796-xgk22" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.467091 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70e61761-82dd-4ac8-a847-1727769f4424-config-volume\") pod \"collect-profiles-29493540-2f7hx\" (UID: \"70e61761-82dd-4ac8-a847-1727769f4424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-2f7hx" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.467138 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/912dd730-f999-4811-bf47-485755b7d949-stats-auth\") pod \"router-default-5444994796-xgk22\" (UID: \"912dd730-f999-4811-bf47-485755b7d949\") " pod="openshift-ingress/router-default-5444994796-xgk22" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.467167 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/36ea2b14-0c67-4a42-b9cf-cb1a8aabc1a6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bdn99\" (UID: \"36ea2b14-0c67-4a42-b9cf-cb1a8aabc1a6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bdn99" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.468343 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70e61761-82dd-4ac8-a847-1727769f4424-config-volume\") pod \"collect-profiles-29493540-2f7hx\" (UID: \"70e61761-82dd-4ac8-a847-1727769f4424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-2f7hx" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.468932 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/912dd730-f999-4811-bf47-485755b7d949-service-ca-bundle\") pod \"router-default-5444994796-xgk22\" (UID: \"912dd730-f999-4811-bf47-485755b7d949\") " pod="openshift-ingress/router-default-5444994796-xgk22" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 
15:03:37.470729 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/912dd730-f999-4811-bf47-485755b7d949-default-certificate\") pod \"router-default-5444994796-xgk22\" (UID: \"912dd730-f999-4811-bf47-485755b7d949\") " pod="openshift-ingress/router-default-5444994796-xgk22" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.470915 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70e61761-82dd-4ac8-a847-1727769f4424-secret-volume\") pod \"collect-profiles-29493540-2f7hx\" (UID: \"70e61761-82dd-4ac8-a847-1727769f4424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-2f7hx" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.471176 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/912dd730-f999-4811-bf47-485755b7d949-stats-auth\") pod \"router-default-5444994796-xgk22\" (UID: \"912dd730-f999-4811-bf47-485755b7d949\") " pod="openshift-ingress/router-default-5444994796-xgk22" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.471355 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/36ea2b14-0c67-4a42-b9cf-cb1a8aabc1a6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bdn99\" (UID: \"36ea2b14-0c67-4a42-b9cf-cb1a8aabc1a6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bdn99" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.471973 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/912dd730-f999-4811-bf47-485755b7d949-metrics-certs\") pod \"router-default-5444994796-xgk22\" (UID: \"912dd730-f999-4811-bf47-485755b7d949\") " pod="openshift-ingress/router-default-5444994796-xgk22" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.473201 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c4e89991-7235-4188-8c4a-36d2dc3945f5-srv-cert\") pod \"olm-operator-6b444d44fb-r9mpt\" (UID: \"c4e89991-7235-4188-8c4a-36d2dc3945f5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9mpt" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.474068 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c4e89991-7235-4188-8c4a-36d2dc3945f5-profile-collector-cert\") pod \"olm-operator-6b444d44fb-r9mpt\" (UID: \"c4e89991-7235-4188-8c4a-36d2dc3945f5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9mpt" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.475335 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c9f781b6-b4dc-428e-a4b5-c0edca799be2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-lcqnw\" (UID: \"c9f781b6-b4dc-428e-a4b5-c0edca799be2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lcqnw" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.482660 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7ba6c64d-c248-4150-93c7-5acf1fcbadfd-srv-cert\") pod \"catalog-operator-68c6474976-tkj4d\" 
(UID: \"7ba6c64d-c248-4150-93c7-5acf1fcbadfd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkj4d" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.483356 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7ba6c64d-c248-4150-93c7-5acf1fcbadfd-profile-collector-cert\") pod \"catalog-operator-68c6474976-tkj4d\" (UID: \"7ba6c64d-c248-4150-93c7-5acf1fcbadfd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkj4d" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.501780 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcr6g\" (UniqueName: \"kubernetes.io/projected/c9f781b6-b4dc-428e-a4b5-c0edca799be2-kube-api-access-gcr6g\") pod \"cluster-image-registry-operator-dc59b4c8b-lcqnw\" (UID: \"c9f781b6-b4dc-428e-a4b5-c0edca799be2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lcqnw" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.520514 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c58e09f-229a-41a8-814f-d2d919d706f6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-r6dw8\" (UID: \"9c58e09f-229a-41a8-814f-d2d919d706f6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6dw8" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.535626 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klw2s\" (UniqueName: \"kubernetes.io/projected/e1399bb5-4202-4d0e-aac3-83bec9d52d2d-kube-api-access-klw2s\") pod \"downloads-7954f5f757-z2gjc\" (UID: \"e1399bb5-4202-4d0e-aac3-83bec9d52d2d\") " pod="openshift-console/downloads-7954f5f757-z2gjc" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.555750 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rjts\" (UniqueName: \"kubernetes.io/projected/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-kube-api-access-6rjts\") pod \"console-f9d7485db-vzxzx\" (UID: \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\") " pod="openshift-console/console-f9d7485db-vzxzx" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.581251 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj2rl\" (UniqueName: \"kubernetes.io/projected/65b08b40-b2e6-4db4-8cb1-14a48a144f3b-kube-api-access-bj2rl\") pod \"openshift-apiserver-operator-796bbdcf4f-hltf8\" (UID: \"65b08b40-b2e6-4db4-8cb1-14a48a144f3b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hltf8" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.587299 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-z2gjc" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.598751 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z47lw\" (UniqueName: \"kubernetes.io/projected/7fbb0ada-30b2-4b03-bb9a-456f07e78a42-kube-api-access-z47lw\") pod \"console-operator-58897d9998-8jcmm\" (UID: \"7fbb0ada-30b2-4b03-bb9a-456f07e78a42\") " pod="openshift-console-operator/console-operator-58897d9998-8jcmm" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.599900 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lcqnw" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.624938 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxmrq\" (UniqueName: \"kubernetes.io/projected/3a430c60-e09a-473a-8938-c6e67c6fe89f-kube-api-access-hxmrq\") pod \"packageserver-d55dfcdfc-2zfnn\" (UID: \"3a430c60-e09a-473a-8938-c6e67c6fe89f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2zfnn" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.640748 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvdlk\" (UniqueName: \"kubernetes.io/projected/285eb7ab-eacb-482f-bafb-45871026d2b1-kube-api-access-lvdlk\") pod \"openshift-controller-manager-operator-756b6f6bc6-qvscx\" (UID: \"285eb7ab-eacb-482f-bafb-45871026d2b1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qvscx" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.655851 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwnb8\" (UniqueName: \"kubernetes.io/projected/81d03df1-14b4-4475-944e-bf81e7abca38-kube-api-access-wwnb8\") pod \"authentication-operator-69f744f599-sfkds\" (UID: \"81d03df1-14b4-4475-944e-bf81e7abca38\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sfkds" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.664239 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8jcmm" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.685461 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz86w\" (UniqueName: \"kubernetes.io/projected/d4b3623d-53f7-4d3a-b3d7-b55fd0736e9d-kube-api-access-qz86w\") pod \"cluster-samples-operator-665b6dd947-nhwwk\" (UID: \"d4b3623d-53f7-4d3a-b3d7-b55fd0736e9d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhwwk" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.686514 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2zfnn" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.696925 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6dw8" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.699992 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttfxr\" (UniqueName: \"kubernetes.io/projected/5ea57229-2fa9-47b3-a2f1-6c28d9434923-kube-api-access-ttfxr\") pod \"apiserver-76f77b778f-vd8ml\" (UID: \"5ea57229-2fa9-47b3-a2f1-6c28d9434923\") " pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.704699 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-vzxzx" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.719189 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v6hf\" (UniqueName: \"kubernetes.io/projected/5b6faa0a-407c-485c-9d10-0ed877cdfe30-kube-api-access-9v6hf\") pod \"machine-approver-56656f9798-dqcjb\" (UID: \"5b6faa0a-407c-485c-9d10-0ed877cdfe30\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqcjb" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.719846 4893 request.go:700] Waited for 1.901631901s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.730800 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qvscx" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.737672 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7r8x\" (UniqueName: \"kubernetes.io/projected/feaf053e-d992-479b-b7ac-f7383e0b4b35-kube-api-access-k7r8x\") pod \"controller-manager-879f6c89f-6q42k\" (UID: \"feaf053e-d992-479b-b7ac-f7383e0b4b35\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.761339 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x7w9\" (UniqueName: \"kubernetes.io/projected/cc9b874e-9d92-4b60-affa-24d0f2286cb8-kube-api-access-4x7w9\") pod \"openshift-config-operator-7777fb866f-6wtc2\" (UID: \"cc9b874e-9d92-4b60-affa-24d0f2286cb8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6wtc2" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.778377 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7p7z\" (UniqueName: \"kubernetes.io/projected/64a0f7cc-6a3a-4604-a964-6fbd123e4d24-kube-api-access-x7p7z\") pod \"machine-config-controller-84d6567774-kvxbc\" (UID: \"64a0f7cc-6a3a-4604-a964-6fbd123e4d24\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kvxbc" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.811610 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fmtw\" (UniqueName: \"kubernetes.io/projected/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-kube-api-access-5fmtw\") pod \"oauth-openshift-558db77b4-zgw9r\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.815928 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqq8s\" (UniqueName: \"kubernetes.io/projected/ca41e21f-75c8-48bc-8611-85bebde78fad-kube-api-access-fqq8s\") pod \"apiserver-7bbb656c7d-wf8nw\" (UID: \"ca41e21f-75c8-48bc-8611-85bebde78fad\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.824850 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-sfkds" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.839165 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.849068 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.850220 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hltf8" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.854971 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tphb\" (UniqueName: \"kubernetes.io/projected/b5a371e6-d5dc-4971-8abf-c193da52013c-kube-api-access-8tphb\") pod \"route-controller-manager-6576b87f9c-nj5sn\" (UID: \"b5a371e6-d5dc-4971-8abf-c193da52013c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.858809 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.859482 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-z2gjc"] Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.859755 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lcqnw"] Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.866845 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqcjb" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.867090 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.875324 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhwwk" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.882811 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.914913 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clmhk\" (UniqueName: \"kubernetes.io/projected/ef3c4a5f-725d-4be0-b800-ab95fba9e33e-kube-api-access-clmhk\") pod \"machine-api-operator-5694c8668f-8ppbb\" (UID: \"ef3c4a5f-725d-4be0-b800-ab95fba9e33e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8ppbb" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.922021 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.927849 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8jcmm"] Jan 28 15:03:37 crc kubenswrapper[4893]: W0128 15:03:37.932235 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b6faa0a_407c_485c_9d10_0ed877cdfe30.slice/crio-309c250f005a342e2e4f7155d8ffa212cd5934c7b8f4d4dbda59d1307c43eb11 WatchSource:0}: Error finding container 309c250f005a342e2e4f7155d8ffa212cd5934c7b8f4d4dbda59d1307c43eb11: Status 404 returned error can't find the container with id 309c250f005a342e2e4f7155d8ffa212cd5934c7b8f4d4dbda59d1307c43eb11 Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.939327 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.940827 4893 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.961260 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.963655 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.970802 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6wtc2" Jan 28 15:03:37 crc kubenswrapper[4893]: W0128 15:03:37.975879 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fbb0ada_30b2_4b03_bb9a_456f07e78a42.slice/crio-77f8c2aa4555978e44d3dc35614eeb3ac6924dca82138818271e825c6d35bc4f WatchSource:0}: Error finding container 77f8c2aa4555978e44d3dc35614eeb3ac6924dca82138818271e825c6d35bc4f: Status 404 returned error can't find the container with id 77f8c2aa4555978e44d3dc35614eeb3ac6924dca82138818271e825c6d35bc4f Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.981230 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kvxbc" Jan 28 15:03:37 crc kubenswrapper[4893]: I0128 15:03:37.998944 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6dw8"] Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.016804 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25rn8\" (UniqueName: \"kubernetes.io/projected/c158949d-4568-4cc2-8e24-8f5f24069664-kube-api-access-25rn8\") pod \"multus-admission-controller-857f4d67dd-n2td9\" (UID: \"c158949d-4568-4cc2-8e24-8f5f24069664\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n2td9" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.020561 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmnxp\" (UniqueName: \"kubernetes.io/projected/1bc93c92-2229-4d87-919d-d4104cf7bcab-kube-api-access-qmnxp\") pod \"service-ca-operator-777779d784-cqgww\" (UID: \"1bc93c92-2229-4d87-919d-d4104cf7bcab\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cqgww" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.049358 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-n2td9" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.059637 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-8ppbb" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.098075 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.110795 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhc76\" (UniqueName: \"kubernetes.io/projected/c30629f6-a476-415a-9fae-6c70598bd3c3-kube-api-access-fhc76\") pod \"package-server-manager-789f6589d5-dz8b4\" (UID: \"c30629f6-a476-415a-9fae-6c70598bd3c3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dz8b4" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.115033 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tdch\" (UniqueName: \"kubernetes.io/projected/36ea2b14-0c67-4a42-b9cf-cb1a8aabc1a6-kube-api-access-5tdch\") pod \"control-plane-machine-set-operator-78cbb6b69f-bdn99\" (UID: \"36ea2b14-0c67-4a42-b9cf-cb1a8aabc1a6\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bdn99" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.115985 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq6sf\" (UniqueName: \"kubernetes.io/projected/96a7f787-34e0-4f85-9db1-33722d80495c-kube-api-access-dq6sf\") pod \"kube-storage-version-migrator-operator-b67b599dd-nxd8x\" (UID: \"96a7f787-34e0-4f85-9db1-33722d80495c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nxd8x" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.127233 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcdwb\" (UniqueName: \"kubernetes.io/projected/9a587792-e86e-434f-873e-c7ce3aac8bce-kube-api-access-zcdwb\") pod \"marketplace-operator-79b997595-fzfvj\" (UID: 
\"9a587792-e86e-434f-873e-c7ce3aac8bce\") " pod="openshift-marketplace/marketplace-operator-79b997595-fzfvj" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.134797 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm5nz\" (UniqueName: \"kubernetes.io/projected/7ba6c64d-c248-4150-93c7-5acf1fcbadfd-kube-api-access-sm5nz\") pod \"catalog-operator-68c6474976-tkj4d\" (UID: \"7ba6c64d-c248-4150-93c7-5acf1fcbadfd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkj4d" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.141560 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn7vn\" (UniqueName: \"kubernetes.io/projected/70e61761-82dd-4ac8-a847-1727769f4424-kube-api-access-nn7vn\") pod \"collect-profiles-29493540-2f7hx\" (UID: \"70e61761-82dd-4ac8-a847-1727769f4424\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-2f7hx" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.142992 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bdn99" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.150832 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-2f7hx" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.175512 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkj4d" Jan 28 15:03:38 crc kubenswrapper[4893]: W0128 15:03:38.175915 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c58e09f_229a_41a8_814f_d2d919d706f6.slice/crio-6f8efe2e639aa7d5a9b62f568be0dee3027343e7599c059af5c347213e76635d WatchSource:0}: Error finding container 6f8efe2e639aa7d5a9b62f568be0dee3027343e7599c059af5c347213e76635d: Status 404 returned error can't find the container with id 6f8efe2e639aa7d5a9b62f568be0dee3027343e7599c059af5c347213e76635d Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.179426 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc8zh\" (UniqueName: \"kubernetes.io/projected/73c142b0-ef25-4567-a816-965a127760af-kube-api-access-jc8zh\") pod \"migrator-59844c95c7-vh5rz\" (UID: \"73c142b0-ef25-4567-a816-965a127760af\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vh5rz" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.182197 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4v24\" (UniqueName: \"kubernetes.io/projected/ec119cba-64e9-448f-8fa8-da55fd66884f-kube-api-access-r4v24\") pod \"ingress-canary-79k8x\" (UID: \"ec119cba-64e9-448f-8fa8-da55fd66884f\") " pod="openshift-ingress-canary/ingress-canary-79k8x" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.199414 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76kvv\" (UniqueName: \"kubernetes.io/projected/912dd730-f999-4811-bf47-485755b7d949-kube-api-access-76kvv\") pod \"router-default-5444994796-xgk22\" (UID: \"912dd730-f999-4811-bf47-485755b7d949\") " pod="openshift-ingress/router-default-5444994796-xgk22" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.226040 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/03627d33-2baf-4ffe-9af2-ad83eb61dd9c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gr6td\" (UID: \"03627d33-2baf-4ffe-9af2-ad83eb61dd9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gr6td" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.237919 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f8hw\" (UniqueName: \"kubernetes.io/projected/d7dc96ec-9b60-46a3-b120-5b75ba5e7124-kube-api-access-9f8hw\") pod \"dns-operator-744455d44c-5cr6t\" (UID: \"d7dc96ec-9b60-46a3-b120-5b75ba5e7124\") " pod="openshift-dns-operator/dns-operator-744455d44c-5cr6t" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.260494 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj295\" (UniqueName: \"kubernetes.io/projected/c4e89991-7235-4188-8c4a-36d2dc3945f5-kube-api-access-mj295\") pod \"olm-operator-6b444d44fb-r9mpt\" (UID: \"c4e89991-7235-4188-8c4a-36d2dc3945f5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9mpt" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.285391 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2x8x\" (UniqueName: \"kubernetes.io/projected/1ab80115-9e4f-48a1-8c19-a89f554962cb-kube-api-access-x2x8x\") pod \"service-ca-9c57cc56f-fv7gf\" (UID: \"1ab80115-9e4f-48a1-8c19-a89f554962cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-fv7gf" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.335079 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hltf8"] Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.342295 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qvscx"] Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.363798 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2zfnn"] Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.365191 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-vzxzx"] Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.398163 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cqgww" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.398860 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5cr6t" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.402096 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fzfvj" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.398936 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gr6td" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.398996 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-79k8x" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.400136 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nxd8x" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.408313 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vh5rz" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.414894 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e54303a1-baec-46eb-92e9-9beeca76bb98-registry-certificates\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.415054 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/82b04f86-ec41-4af7-9f43-02928feaabd8-etcd-client\") pod \"etcd-operator-b45778765-gnmz9\" (UID: \"82b04f86-ec41-4af7-9f43-02928feaabd8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnmz9" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.415091 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5282\" (UniqueName: \"kubernetes.io/projected/82b04f86-ec41-4af7-9f43-02928feaabd8-kube-api-access-t5282\") pod \"etcd-operator-b45778765-gnmz9\" (UID: \"82b04f86-ec41-4af7-9f43-02928feaabd8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnmz9" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.415193 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/82b04f86-ec41-4af7-9f43-02928feaabd8-etcd-ca\") pod \"etcd-operator-b45778765-gnmz9\" (UID: \"82b04f86-ec41-4af7-9f43-02928feaabd8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnmz9" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.415270 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e54303a1-baec-46eb-92e9-9beeca76bb98-trusted-ca\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.415370 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/82b04f86-ec41-4af7-9f43-02928feaabd8-etcd-service-ca\") pod \"etcd-operator-b45778765-gnmz9\" (UID: \"82b04f86-ec41-4af7-9f43-02928feaabd8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnmz9" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.416744 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e54303a1-baec-46eb-92e9-9beeca76bb98-registry-tls\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.416923 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dz8b4" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.418104 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e54303a1-baec-46eb-92e9-9beeca76bb98-bound-sa-token\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.418136 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fb8b8327-6a52-41f7-b512-f6572f06c3c4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mvn69\" (UID: \"fb8b8327-6a52-41f7-b512-f6572f06c3c4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mvn69" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.418166 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fb8b8327-6a52-41f7-b512-f6572f06c3c4-images\") pod \"machine-config-operator-74547568cd-mvn69\" (UID: \"fb8b8327-6a52-41f7-b512-f6572f06c3c4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mvn69" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.418293 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e54303a1-baec-46eb-92e9-9beeca76bb98-ca-trust-extracted\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.418554 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82b04f86-ec41-4af7-9f43-02928feaabd8-config\") pod \"etcd-operator-b45778765-gnmz9\" (UID: \"82b04f86-ec41-4af7-9f43-02928feaabd8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnmz9" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.419979 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e54303a1-baec-46eb-92e9-9beeca76bb98-installation-pull-secrets\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.420842 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fb8b8327-6a52-41f7-b512-f6572f06c3c4-proxy-tls\") pod \"machine-config-operator-74547568cd-mvn69\" (UID: \"fb8b8327-6a52-41f7-b512-f6572f06c3c4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mvn69" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.421475 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46htn\" (UniqueName: \"kubernetes.io/projected/fb8b8327-6a52-41f7-b512-f6572f06c3c4-kube-api-access-46htn\") pod \"machine-config-operator-74547568cd-mvn69\" (UID: 
\"fb8b8327-6a52-41f7-b512-f6572f06c3c4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mvn69" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.421819 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb9pb\" (UniqueName: \"kubernetes.io/projected/e54303a1-baec-46eb-92e9-9beeca76bb98-kube-api-access-sb9pb\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.422150 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.422348 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82b04f86-ec41-4af7-9f43-02928feaabd8-serving-cert\") pod \"etcd-operator-b45778765-gnmz9\" (UID: \"82b04f86-ec41-4af7-9f43-02928feaabd8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnmz9" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.423232 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-fv7gf" Jan 28 15:03:38 crc kubenswrapper[4893]: E0128 15:03:38.425015 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:38.924993435 +0000 UTC m=+136.698608463 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.434895 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xgk22" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.457180 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9mpt" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.484591 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6q42k"] Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.516239 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhwwk"] Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.522119 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zgw9r"] Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.523912 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.524117 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e54303a1-baec-46eb-92e9-9beeca76bb98-registry-tls\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.524205 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bcwx\" (UniqueName: \"kubernetes.io/projected/4f1e2d4c-d68d-4905-ac09-97eead457a6a-kube-api-access-4bcwx\") pod \"ingress-operator-5b745b69d9-jmvhq\" (UID: \"4f1e2d4c-d68d-4905-ac09-97eead457a6a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jmvhq" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.524235 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/803a27b8-0f88-47f7-b1aa-81f57e6c7238-metrics-tls\") pod \"dns-default-d5fwk\" (UID: \"803a27b8-0f88-47f7-b1aa-81f57e6c7238\") " pod="openshift-dns/dns-default-d5fwk" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.524257 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e54303a1-baec-46eb-92e9-9beeca76bb98-bound-sa-token\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.524280 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fb8b8327-6a52-41f7-b512-f6572f06c3c4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mvn69\" (UID: \"fb8b8327-6a52-41f7-b512-f6572f06c3c4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mvn69" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.524302 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e54303a1-baec-46eb-92e9-9beeca76bb98-ca-trust-extracted\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.524324 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fb8b8327-6a52-41f7-b512-f6572f06c3c4-images\") pod \"machine-config-operator-74547568cd-mvn69\" (UID: \"fb8b8327-6a52-41f7-b512-f6572f06c3c4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mvn69" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.524367 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad98500b-dd9c-4691-9a0b-0e157e32d90d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4zlj2\" (UID: \"ad98500b-dd9c-4691-9a0b-0e157e32d90d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4zlj2" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.524391 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d113579e-5e8a-4d1e-a4db-a739dd0ab66c-node-bootstrap-token\") pod \"machine-config-server-stl87\" (UID: \"d113579e-5e8a-4d1e-a4db-a739dd0ab66c\") " pod="openshift-machine-config-operator/machine-config-server-stl87" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.524515 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfb45\" (UniqueName: \"kubernetes.io/projected/6bd4d6ea-438a-43d5-a137-f14a6c8d75f9-kube-api-access-wfb45\") pod \"csi-hostpathplugin-n2xxg\" (UID: \"6bd4d6ea-438a-43d5-a137-f14a6c8d75f9\") " pod="hostpath-provisioner/csi-hostpathplugin-n2xxg" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.524544 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82b04f86-ec41-4af7-9f43-02928feaabd8-config\") pod \"etcd-operator-b45778765-gnmz9\" (UID: \"82b04f86-ec41-4af7-9f43-02928feaabd8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnmz9" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.524618 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xww5g\" (UniqueName: \"kubernetes.io/projected/d113579e-5e8a-4d1e-a4db-a739dd0ab66c-kube-api-access-xww5g\") pod \"machine-config-server-stl87\" (UID: \"d113579e-5e8a-4d1e-a4db-a739dd0ab66c\") " pod="openshift-machine-config-operator/machine-config-server-stl87" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.524642 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6bd4d6ea-438a-43d5-a137-f14a6c8d75f9-socket-dir\") pod \"csi-hostpathplugin-n2xxg\" (UID: \"6bd4d6ea-438a-43d5-a137-f14a6c8d75f9\") " pod="hostpath-provisioner/csi-hostpathplugin-n2xxg" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.524663 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6bd4d6ea-438a-43d5-a137-f14a6c8d75f9-csi-data-dir\") pod \"csi-hostpathplugin-n2xxg\" (UID: \"6bd4d6ea-438a-43d5-a137-f14a6c8d75f9\") " pod="hostpath-provisioner/csi-hostpathplugin-n2xxg" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.524769 4893 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e54303a1-baec-46eb-92e9-9beeca76bb98-installation-pull-secrets\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.524796 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fb8b8327-6a52-41f7-b512-f6572f06c3c4-proxy-tls\") pod \"machine-config-operator-74547568cd-mvn69\" (UID: \"fb8b8327-6a52-41f7-b512-f6572f06c3c4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mvn69" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.524921 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46htn\" (UniqueName: \"kubernetes.io/projected/fb8b8327-6a52-41f7-b512-f6572f06c3c4-kube-api-access-46htn\") pod \"machine-config-operator-74547568cd-mvn69\" (UID: \"fb8b8327-6a52-41f7-b512-f6572f06c3c4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mvn69" Jan 28 15:03:38 crc kubenswrapper[4893]: E0128 15:03:38.525102 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:39.025074767 +0000 UTC m=+136.798689855 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.526006 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82b04f86-ec41-4af7-9f43-02928feaabd8-config\") pod \"etcd-operator-b45778765-gnmz9\" (UID: \"82b04f86-ec41-4af7-9f43-02928feaabd8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnmz9" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.526328 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb9pb\" (UniqueName: \"kubernetes.io/projected/e54303a1-baec-46eb-92e9-9beeca76bb98-kube-api-access-sb9pb\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.526441 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fb8b8327-6a52-41f7-b512-f6572f06c3c4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mvn69\" (UID: \"fb8b8327-6a52-41f7-b512-f6572f06c3c4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mvn69" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.526541 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:38 crc kubenswrapper[4893]: E0128 15:03:38.526891 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:39.026876046 +0000 UTC m=+136.800491174 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.527641 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82b04f86-ec41-4af7-9f43-02928feaabd8-serving-cert\") pod \"etcd-operator-b45778765-gnmz9\" (UID: \"82b04f86-ec41-4af7-9f43-02928feaabd8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnmz9" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.528123 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4f1e2d4c-d68d-4905-ac09-97eead457a6a-metrics-tls\") pod \"ingress-operator-5b745b69d9-jmvhq\" (UID: \"4f1e2d4c-d68d-4905-ac09-97eead457a6a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jmvhq" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.528578 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e54303a1-baec-46eb-92e9-9beeca76bb98-ca-trust-extracted\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.529671 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d113579e-5e8a-4d1e-a4db-a739dd0ab66c-certs\") pod \"machine-config-server-stl87\" (UID: \"d113579e-5e8a-4d1e-a4db-a739dd0ab66c\") " pod="openshift-machine-config-operator/machine-config-server-stl87" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.529713 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6bd4d6ea-438a-43d5-a137-f14a6c8d75f9-mountpoint-dir\") pod \"csi-hostpathplugin-n2xxg\" (UID: \"6bd4d6ea-438a-43d5-a137-f14a6c8d75f9\") " pod="hostpath-provisioner/csi-hostpathplugin-n2xxg" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.529769 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e54303a1-baec-46eb-92e9-9beeca76bb98-registry-certificates\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.529815 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6bd4d6ea-438a-43d5-a137-f14a6c8d75f9-registration-dir\") pod \"csi-hostpathplugin-n2xxg\" (UID: \"6bd4d6ea-438a-43d5-a137-f14a6c8d75f9\") " pod="hostpath-provisioner/csi-hostpathplugin-n2xxg"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.529850 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4f1e2d4c-d68d-4905-ac09-97eead457a6a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jmvhq\" (UID: \"4f1e2d4c-d68d-4905-ac09-97eead457a6a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jmvhq"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.529877 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad98500b-dd9c-4691-9a0b-0e157e32d90d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4zlj2\" (UID: \"ad98500b-dd9c-4691-9a0b-0e157e32d90d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4zlj2"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.529989 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/82b04f86-ec41-4af7-9f43-02928feaabd8-etcd-client\") pod \"etcd-operator-b45778765-gnmz9\" (UID: \"82b04f86-ec41-4af7-9f43-02928feaabd8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnmz9"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.530028 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5282\" (UniqueName: \"kubernetes.io/projected/82b04f86-ec41-4af7-9f43-02928feaabd8-kube-api-access-t5282\") pod \"etcd-operator-b45778765-gnmz9\" (UID: \"82b04f86-ec41-4af7-9f43-02928feaabd8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnmz9"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.530186 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6bd4d6ea-438a-43d5-a137-f14a6c8d75f9-plugins-dir\") pod \"csi-hostpathplugin-n2xxg\" (UID: \"6bd4d6ea-438a-43d5-a137-f14a6c8d75f9\") " pod="hostpath-provisioner/csi-hostpathplugin-n2xxg"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.530244 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/82b04f86-ec41-4af7-9f43-02928feaabd8-etcd-ca\") pod \"etcd-operator-b45778765-gnmz9\" (UID: \"82b04f86-ec41-4af7-9f43-02928feaabd8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnmz9"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.531223 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e54303a1-baec-46eb-92e9-9beeca76bb98-trusted-ca\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.531277 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad98500b-dd9c-4691-9a0b-0e157e32d90d-config\") pod \"kube-controller-manager-operator-78b949d7b-4zlj2\" (UID: \"ad98500b-dd9c-4691-9a0b-0e157e32d90d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4zlj2"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.531400 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e54303a1-baec-46eb-92e9-9beeca76bb98-registry-certificates\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.531652 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e54303a1-baec-46eb-92e9-9beeca76bb98-installation-pull-secrets\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.531838 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddhwn\" (UniqueName: \"kubernetes.io/projected/803a27b8-0f88-47f7-b1aa-81f57e6c7238-kube-api-access-ddhwn\") pod \"dns-default-d5fwk\" (UID: \"803a27b8-0f88-47f7-b1aa-81f57e6c7238\") " pod="openshift-dns/dns-default-d5fwk"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.531840 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fb8b8327-6a52-41f7-b512-f6572f06c3c4-proxy-tls\") pod \"machine-config-operator-74547568cd-mvn69\" (UID: \"fb8b8327-6a52-41f7-b512-f6572f06c3c4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mvn69"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.531915 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/82b04f86-ec41-4af7-9f43-02928feaabd8-etcd-service-ca\") pod \"etcd-operator-b45778765-gnmz9\" (UID: \"82b04f86-ec41-4af7-9f43-02928feaabd8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnmz9"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.532917 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e54303a1-baec-46eb-92e9-9beeca76bb98-trusted-ca\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.532945 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/82b04f86-ec41-4af7-9f43-02928feaabd8-etcd-service-ca\") pod \"etcd-operator-b45778765-gnmz9\" (UID: \"82b04f86-ec41-4af7-9f43-02928feaabd8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnmz9"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.533268 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/82b04f86-ec41-4af7-9f43-02928feaabd8-etcd-ca\") pod \"etcd-operator-b45778765-gnmz9\" (UID: \"82b04f86-ec41-4af7-9f43-02928feaabd8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnmz9"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.533643 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4f1e2d4c-d68d-4905-ac09-97eead457a6a-trusted-ca\") pod \"ingress-operator-5b745b69d9-jmvhq\" (UID: \"4f1e2d4c-d68d-4905-ac09-97eead457a6a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jmvhq"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.533793 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/803a27b8-0f88-47f7-b1aa-81f57e6c7238-config-volume\") pod \"dns-default-d5fwk\" (UID: \"803a27b8-0f88-47f7-b1aa-81f57e6c7238\") " pod="openshift-dns/dns-default-d5fwk"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.535539 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/82b04f86-ec41-4af7-9f43-02928feaabd8-etcd-client\") pod \"etcd-operator-b45778765-gnmz9\" (UID: \"82b04f86-ec41-4af7-9f43-02928feaabd8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnmz9"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.536155 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fb8b8327-6a52-41f7-b512-f6572f06c3c4-images\") pod \"machine-config-operator-74547568cd-mvn69\" (UID: \"fb8b8327-6a52-41f7-b512-f6572f06c3c4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mvn69"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.538295 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e54303a1-baec-46eb-92e9-9beeca76bb98-registry-tls\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.539504 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82b04f86-ec41-4af7-9f43-02928feaabd8-serving-cert\") pod \"etcd-operator-b45778765-gnmz9\" (UID: \"82b04f86-ec41-4af7-9f43-02928feaabd8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnmz9"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.557643 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e54303a1-baec-46eb-92e9-9beeca76bb98-bound-sa-token\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.577204 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46htn\" (UniqueName: \"kubernetes.io/projected/fb8b8327-6a52-41f7-b512-f6572f06c3c4-kube-api-access-46htn\") pod \"machine-config-operator-74547568cd-mvn69\" (UID: \"fb8b8327-6a52-41f7-b512-f6572f06c3c4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mvn69"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.602899 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-sfkds"]
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.618184 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb9pb\" (UniqueName: \"kubernetes.io/projected/e54303a1-baec-46eb-92e9-9beeca76bb98-kube-api-access-sb9pb\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.628290 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5282\" (UniqueName: \"kubernetes.io/projected/82b04f86-ec41-4af7-9f43-02928feaabd8-kube-api-access-t5282\") pod \"etcd-operator-b45778765-gnmz9\" (UID: \"82b04f86-ec41-4af7-9f43-02928feaabd8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnmz9"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.632246 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-gnmz9"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.634339 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:03:38 crc kubenswrapper[4893]: E0128 15:03:38.634575 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:39.134521056 +0000 UTC m=+136.908136084 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.634771 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.634845 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4f1e2d4c-d68d-4905-ac09-97eead457a6a-metrics-tls\") pod \"ingress-operator-5b745b69d9-jmvhq\" (UID: \"4f1e2d4c-d68d-4905-ac09-97eead457a6a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jmvhq"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.634879 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d113579e-5e8a-4d1e-a4db-a739dd0ab66c-certs\") pod \"machine-config-server-stl87\" (UID: \"d113579e-5e8a-4d1e-a4db-a739dd0ab66c\") " pod="openshift-machine-config-operator/machine-config-server-stl87"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.634902 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6bd4d6ea-438a-43d5-a137-f14a6c8d75f9-mountpoint-dir\") pod \"csi-hostpathplugin-n2xxg\" (UID: \"6bd4d6ea-438a-43d5-a137-f14a6c8d75f9\") " pod="hostpath-provisioner/csi-hostpathplugin-n2xxg"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.634925 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6bd4d6ea-438a-43d5-a137-f14a6c8d75f9-registration-dir\") pod \"csi-hostpathplugin-n2xxg\" (UID: \"6bd4d6ea-438a-43d5-a137-f14a6c8d75f9\") " pod="hostpath-provisioner/csi-hostpathplugin-n2xxg"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.634948 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4f1e2d4c-d68d-4905-ac09-97eead457a6a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jmvhq\" (UID: \"4f1e2d4c-d68d-4905-ac09-97eead457a6a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jmvhq"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.634969 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad98500b-dd9c-4691-9a0b-0e157e32d90d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4zlj2\" (UID: \"ad98500b-dd9c-4691-9a0b-0e157e32d90d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4zlj2"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.635012 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6bd4d6ea-438a-43d5-a137-f14a6c8d75f9-plugins-dir\") pod \"csi-hostpathplugin-n2xxg\" (UID: \"6bd4d6ea-438a-43d5-a137-f14a6c8d75f9\") " pod="hostpath-provisioner/csi-hostpathplugin-n2xxg"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.635042 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad98500b-dd9c-4691-9a0b-0e157e32d90d-config\") pod \"kube-controller-manager-operator-78b949d7b-4zlj2\" (UID: \"ad98500b-dd9c-4691-9a0b-0e157e32d90d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4zlj2"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.635066 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddhwn\" (UniqueName: \"kubernetes.io/projected/803a27b8-0f88-47f7-b1aa-81f57e6c7238-kube-api-access-ddhwn\") pod \"dns-default-d5fwk\" (UID: \"803a27b8-0f88-47f7-b1aa-81f57e6c7238\") " pod="openshift-dns/dns-default-d5fwk"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.635103 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4f1e2d4c-d68d-4905-ac09-97eead457a6a-trusted-ca\") pod \"ingress-operator-5b745b69d9-jmvhq\" (UID: \"4f1e2d4c-d68d-4905-ac09-97eead457a6a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jmvhq"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.635128 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/803a27b8-0f88-47f7-b1aa-81f57e6c7238-config-volume\") pod \"dns-default-d5fwk\" (UID: \"803a27b8-0f88-47f7-b1aa-81f57e6c7238\") " pod="openshift-dns/dns-default-d5fwk"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.635167 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bcwx\" (UniqueName: \"kubernetes.io/projected/4f1e2d4c-d68d-4905-ac09-97eead457a6a-kube-api-access-4bcwx\") pod \"ingress-operator-5b745b69d9-jmvhq\" (UID: \"4f1e2d4c-d68d-4905-ac09-97eead457a6a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jmvhq"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.635188 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/803a27b8-0f88-47f7-b1aa-81f57e6c7238-metrics-tls\") pod \"dns-default-d5fwk\" (UID: \"803a27b8-0f88-47f7-b1aa-81f57e6c7238\") " pod="openshift-dns/dns-default-d5fwk"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.635216 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad98500b-dd9c-4691-9a0b-0e157e32d90d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4zlj2\" (UID: \"ad98500b-dd9c-4691-9a0b-0e157e32d90d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4zlj2"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.635264 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d113579e-5e8a-4d1e-a4db-a739dd0ab66c-node-bootstrap-token\") pod \"machine-config-server-stl87\" (UID: \"d113579e-5e8a-4d1e-a4db-a739dd0ab66c\") " pod="openshift-machine-config-operator/machine-config-server-stl87"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.635291 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfb45\" (UniqueName: \"kubernetes.io/projected/6bd4d6ea-438a-43d5-a137-f14a6c8d75f9-kube-api-access-wfb45\") pod \"csi-hostpathplugin-n2xxg\" (UID: \"6bd4d6ea-438a-43d5-a137-f14a6c8d75f9\") " pod="hostpath-provisioner/csi-hostpathplugin-n2xxg"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.635320 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xww5g\" (UniqueName: \"kubernetes.io/projected/d113579e-5e8a-4d1e-a4db-a739dd0ab66c-kube-api-access-xww5g\") pod \"machine-config-server-stl87\" (UID: \"d113579e-5e8a-4d1e-a4db-a739dd0ab66c\") " pod="openshift-machine-config-operator/machine-config-server-stl87"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.635341 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6bd4d6ea-438a-43d5-a137-f14a6c8d75f9-socket-dir\") pod \"csi-hostpathplugin-n2xxg\" (UID: \"6bd4d6ea-438a-43d5-a137-f14a6c8d75f9\") " pod="hostpath-provisioner/csi-hostpathplugin-n2xxg"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.635364 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6bd4d6ea-438a-43d5-a137-f14a6c8d75f9-csi-data-dir\") pod \"csi-hostpathplugin-n2xxg\" (UID: \"6bd4d6ea-438a-43d5-a137-f14a6c8d75f9\") " pod="hostpath-provisioner/csi-hostpathplugin-n2xxg"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.635560 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6bd4d6ea-438a-43d5-a137-f14a6c8d75f9-csi-data-dir\") pod \"csi-hostpathplugin-n2xxg\" (UID: \"6bd4d6ea-438a-43d5-a137-f14a6c8d75f9\") " pod="hostpath-provisioner/csi-hostpathplugin-n2xxg"
Jan 28 15:03:38 crc kubenswrapper[4893]: E0128 15:03:38.635892 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:39.135876263 +0000 UTC m=+136.909491291 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.636741 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6bd4d6ea-438a-43d5-a137-f14a6c8d75f9-mountpoint-dir\") pod \"csi-hostpathplugin-n2xxg\" (UID: \"6bd4d6ea-438a-43d5-a137-f14a6c8d75f9\") " pod="hostpath-provisioner/csi-hostpathplugin-n2xxg"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.637472 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6bd4d6ea-438a-43d5-a137-f14a6c8d75f9-registration-dir\") pod \"csi-hostpathplugin-n2xxg\" (UID: \"6bd4d6ea-438a-43d5-a137-f14a6c8d75f9\") " pod="hostpath-provisioner/csi-hostpathplugin-n2xxg"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.637567 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6bd4d6ea-438a-43d5-a137-f14a6c8d75f9-plugins-dir\") pod \"csi-hostpathplugin-n2xxg\" (UID: \"6bd4d6ea-438a-43d5-a137-f14a6c8d75f9\") " pod="hostpath-provisioner/csi-hostpathplugin-n2xxg"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.640409 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad98500b-dd9c-4691-9a0b-0e157e32d90d-config\") pod \"kube-controller-manager-operator-78b949d7b-4zlj2\" (UID: \"ad98500b-dd9c-4691-9a0b-0e157e32d90d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4zlj2"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.641259 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/803a27b8-0f88-47f7-b1aa-81f57e6c7238-config-volume\") pod \"dns-default-d5fwk\" (UID: \"803a27b8-0f88-47f7-b1aa-81f57e6c7238\") " pod="openshift-dns/dns-default-d5fwk"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.641589 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6bd4d6ea-438a-43d5-a137-f14a6c8d75f9-socket-dir\") pod \"csi-hostpathplugin-n2xxg\" (UID: \"6bd4d6ea-438a-43d5-a137-f14a6c8d75f9\") " pod="hostpath-provisioner/csi-hostpathplugin-n2xxg"
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.644334 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d113579e-5e8a-4d1e-a4db-a739dd0ab66c-certs\") pod \"machine-config-server-stl87\" (UID: \"d113579e-5e8a-4d1e-a4db-a739dd0ab66c\") " pod="openshift-machine-config-operator/machine-config-server-stl87"
pod="openshift-machine-config-operator/machine-config-server-stl87" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.646456 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/803a27b8-0f88-47f7-b1aa-81f57e6c7238-metrics-tls\") pod \"dns-default-d5fwk\" (UID: \"803a27b8-0f88-47f7-b1aa-81f57e6c7238\") " pod="openshift-dns/dns-default-d5fwk" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.652235 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d113579e-5e8a-4d1e-a4db-a739dd0ab66c-node-bootstrap-token\") pod \"machine-config-server-stl87\" (UID: \"d113579e-5e8a-4d1e-a4db-a739dd0ab66c\") " pod="openshift-machine-config-operator/machine-config-server-stl87" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.661317 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4f1e2d4c-d68d-4905-ac09-97eead457a6a-trusted-ca\") pod \"ingress-operator-5b745b69d9-jmvhq\" (UID: \"4f1e2d4c-d68d-4905-ac09-97eead457a6a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jmvhq" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.661763 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad98500b-dd9c-4691-9a0b-0e157e32d90d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4zlj2\" (UID: \"ad98500b-dd9c-4691-9a0b-0e157e32d90d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4zlj2" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.667984 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4f1e2d4c-d68d-4905-ac09-97eead457a6a-metrics-tls\") pod \"ingress-operator-5b745b69d9-jmvhq\" (UID: \"4f1e2d4c-d68d-4905-ac09-97eead457a6a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jmvhq" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.681798 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4f1e2d4c-d68d-4905-ac09-97eead457a6a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jmvhq\" (UID: \"4f1e2d4c-d68d-4905-ac09-97eead457a6a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jmvhq" Jan 28 15:03:38 crc kubenswrapper[4893]: W0128 15:03:38.714513 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17fc2b8f_01ad_426d_9dfa_4531ac3ff28e.slice/crio-9dc84e4bc4f50bc0c9bf471442861127c14f9d2270653181d414466feefd8f6e WatchSource:0}: Error finding container 9dc84e4bc4f50bc0c9bf471442861127c14f9d2270653181d414466feefd8f6e: Status 404 returned error can't find the container with id 9dc84e4bc4f50bc0c9bf471442861127c14f9d2270653181d414466feefd8f6e Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.715751 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mvn69" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.722807 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-z2gjc" event={"ID":"e1399bb5-4202-4d0e-aac3-83bec9d52d2d","Type":"ContainerStarted","Data":"33fa83199cc7897ad783a9d841e46883097d7f0938abaa9620456048273a708a"} Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.722870 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-z2gjc" event={"ID":"e1399bb5-4202-4d0e-aac3-83bec9d52d2d","Type":"ContainerStarted","Data":"970c9637b7942ac1b32f4436d1729233818007a6f400a51ba54507f155e6917c"} Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.723317 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bcwx\" (UniqueName: \"kubernetes.io/projected/4f1e2d4c-d68d-4905-ac09-97eead457a6a-kube-api-access-4bcwx\") pod \"ingress-operator-5b745b69d9-jmvhq\" (UID: \"4f1e2d4c-d68d-4905-ac09-97eead457a6a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jmvhq" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.723735 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-z2gjc" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.726793 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8jcmm" event={"ID":"7fbb0ada-30b2-4b03-bb9a-456f07e78a42","Type":"ContainerStarted","Data":"b97ea8ac9c412c185e1bfb79a02bd55b3ae0751ad6d08a3604330994fe4e4e7b"} Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.726827 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8jcmm" event={"ID":"7fbb0ada-30b2-4b03-bb9a-456f07e78a42","Type":"ContainerStarted","Data":"77f8c2aa4555978e44d3dc35614eeb3ac6924dca82138818271e825c6d35bc4f"} Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.728171 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-8jcmm" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.729821 4893 patch_prober.go:28] interesting pod/console-operator-58897d9998-8jcmm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.729861 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8jcmm" podUID="7fbb0ada-30b2-4b03-bb9a-456f07e78a42" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.729962 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2zfnn" event={"ID":"3a430c60-e09a-473a-8938-c6e67c6fe89f","Type":"ContainerStarted","Data":"edcd05b3a549a07f1790ca73a81e7cd989d670e3e1c80b9a9c071c8a273a836f"} Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.731404 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gjc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.731433 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z2gjc" podUID="e1399bb5-4202-4d0e-aac3-83bec9d52d2d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.733216 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lcqnw" event={"ID":"c9f781b6-b4dc-428e-a4b5-c0edca799be2","Type":"ContainerStarted","Data":"268d81baeeb7ed1ce1c6f44875f5efd8d9d6bf9937dca2d73c427f1b5e7f3e9f"} Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.733300 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lcqnw" event={"ID":"c9f781b6-b4dc-428e-a4b5-c0edca799be2","Type":"ContainerStarted","Data":"017e34652926f37449458beaf6175f06fc98b5a39d46bade2ad52f489d47adb1"} Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.736053 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:38 crc kubenswrapper[4893]: E0128 15:03:38.736502 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:39.236461808 +0000 UTC m=+137.010076836 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.736823 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddhwn\" (UniqueName: \"kubernetes.io/projected/803a27b8-0f88-47f7-b1aa-81f57e6c7238-kube-api-access-ddhwn\") pod \"dns-default-d5fwk\" (UID: \"803a27b8-0f88-47f7-b1aa-81f57e6c7238\") " pod="openshift-dns/dns-default-d5fwk" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.737738 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqcjb" event={"ID":"5b6faa0a-407c-485c-9d10-0ed877cdfe30","Type":"ContainerStarted","Data":"b6f03d7c6cebef277e5624100440c31c1420ec09389c478444e15c5d361d1c47"} Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.737781 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqcjb" event={"ID":"5b6faa0a-407c-485c-9d10-0ed877cdfe30","Type":"ContainerStarted","Data":"309c250f005a342e2e4f7155d8ffa212cd5934c7b8f4d4dbda59d1307c43eb11"} Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.741740 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vzxzx" event={"ID":"7d249efd-e40b-430f-98ec-9ad9c4e5cf70","Type":"ContainerStarted","Data":"94feddb56e2310a0d9a4fef68d89c33484f975433a10e263ce11830ea8a9699b"} Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.745605 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qvscx" event={"ID":"285eb7ab-eacb-482f-bafb-45871026d2b1","Type":"ContainerStarted","Data":"19bff0c593ff0895cd82af4ab13269c0b23a624663f334cd2d7b142b8ecebca9"} Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.748363 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hltf8" event={"ID":"65b08b40-b2e6-4db4-8cb1-14a48a144f3b","Type":"ContainerStarted","Data":"eea3cca1aed1a5b3bd0920f9747638dfcbacc8528721e5965e87337e63d009c8"} Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.750742 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6dw8" event={"ID":"9c58e09f-229a-41a8-814f-d2d919d706f6","Type":"ContainerStarted","Data":"6f8efe2e639aa7d5a9b62f568be0dee3027343e7599c059af5c347213e76635d"} Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.755041 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad98500b-dd9c-4691-9a0b-0e157e32d90d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4zlj2\" (UID: \"ad98500b-dd9c-4691-9a0b-0e157e32d90d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4zlj2" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.755904 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfb45\" (UniqueName: 
\"kubernetes.io/projected/6bd4d6ea-438a-43d5-a137-f14a6c8d75f9-kube-api-access-wfb45\") pod \"csi-hostpathplugin-n2xxg\" (UID: \"6bd4d6ea-438a-43d5-a137-f14a6c8d75f9\") " pod="hostpath-provisioner/csi-hostpathplugin-n2xxg" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.766667 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jmvhq" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.777658 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xww5g\" (UniqueName: \"kubernetes.io/projected/d113579e-5e8a-4d1e-a4db-a739dd0ab66c-kube-api-access-xww5g\") pod \"machine-config-server-stl87\" (UID: \"d113579e-5e8a-4d1e-a4db-a739dd0ab66c\") " pod="openshift-machine-config-operator/machine-config-server-stl87" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.782427 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4zlj2" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.790027 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-stl87" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.799918 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-d5fwk" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.812052 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-n2xxg" Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.838785 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:38 crc kubenswrapper[4893]: E0128 15:03:38.839857 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:39.339840951 +0000 UTC m=+137.113455979 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.933476 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw"] Jan 28 15:03:38 crc kubenswrapper[4893]: E0128 15:03:38.940923 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:39.440887449 +0000 UTC m=+137.214502477 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.941571 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.942420 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:38 crc kubenswrapper[4893]: E0128 15:03:38.943042 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:39.443025658 +0000 UTC m=+137.216640686 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:38 crc kubenswrapper[4893]: I0128 15:03:38.961913 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6wtc2"]
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.043717 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:03:39 crc kubenswrapper[4893]: E0128 15:03:39.044525 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:39.544507228 +0000 UTC m=+137.318122256 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.146985 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:39 crc kubenswrapper[4893]: E0128 15:03:39.147457 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:39.647438548 +0000 UTC m=+137.421053576 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.248089 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:03:39 crc kubenswrapper[4893]: E0128 15:03:39.248277 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:39.74824622 +0000 UTC m=+137.521861248 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.248427 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:39 crc kubenswrapper[4893]: E0128 15:03:39.248914 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:39.748898088 +0000 UTC m=+137.522513126 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.349609 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:03:39 crc kubenswrapper[4893]: E0128 15:03:39.349734 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:39.84970556 +0000 UTC m=+137.623320588 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.350315 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:39 crc kubenswrapper[4893]: E0128 15:03:39.350825 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:39.85080236 +0000 UTC m=+137.624417468 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.397521 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-n2td9"]
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.409780 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bdn99"]
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.413337 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-kvxbc"]
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.454728 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:03:39 crc kubenswrapper[4893]: E0128 15:03:39.458054 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:39.958020998 +0000 UTC m=+137.731636036 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.464349 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vd8ml"]
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.494123 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-8ppbb"]
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.498745 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn"]
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.559577 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:39 crc kubenswrapper[4893]: E0128 15:03:39.560124 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:40.060109614 +0000 UTC m=+137.833724642 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.575113 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-z2gjc" podStartSLOduration=117.575097305 podStartE2EDuration="1m57.575097305s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:39.533566948 +0000 UTC m=+137.307181996" watchObservedRunningTime="2026-01-28 15:03:39.575097305 +0000 UTC m=+137.348712333"
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.640716 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5cr6t"]
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.656347 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkj4d"]
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.656430 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-79k8x"]
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.660229 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493540-2f7hx"]
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.660234 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:03:39 crc kubenswrapper[4893]: E0128 15:03:39.660335 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:40.16030996 +0000 UTC m=+137.933924988 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.660708 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:39 crc kubenswrapper[4893]: E0128 15:03:39.661832 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:40.161814581 +0000 UTC m=+137.935429609 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:39 crc kubenswrapper[4893]: W0128 15:03:39.743032 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7dc96ec_9b60_46a3_b120_5b75ba5e7124.slice/crio-25ccefc8ffb736a7d3ea0a1ea4e1ba4573588736a891f88d4d7747fe9fe8fbb1 WatchSource:0}: Error finding container 25ccefc8ffb736a7d3ea0a1ea4e1ba4573588736a891f88d4d7747fe9fe8fbb1: Status 404 returned error can't find the container with id 25ccefc8ffb736a7d3ea0a1ea4e1ba4573588736a891f88d4d7747fe9fe8fbb1
Jan 28 15:03:39 crc kubenswrapper[4893]: W0128 15:03:39.747255 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70e61761_82dd_4ac8_a847_1727769f4424.slice/crio-85847cf247ea4e6ecb8cfe39126c16c4e0fb9c31d4d3a227f31c4b6472b4dcdc WatchSource:0}: Error finding container 85847cf247ea4e6ecb8cfe39126c16c4e0fb9c31d4d3a227f31c4b6472b4dcdc: Status 404 returned error can't find the container with id 85847cf247ea4e6ecb8cfe39126c16c4e0fb9c31d4d3a227f31c4b6472b4dcdc
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.751763 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dz8b4"]
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.754406 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nxd8x"]
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.764507 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:03:39 crc kubenswrapper[4893]: E0128 15:03:39.764693 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:40.264656908 +0000 UTC m=+138.038271936 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.764854 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:39 crc kubenswrapper[4893]: E0128 15:03:39.765312 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:40.265302016 +0000 UTC m=+138.038917044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.771624 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-stl87" event={"ID":"d113579e-5e8a-4d1e-a4db-a739dd0ab66c","Type":"ContainerStarted","Data":"60b7dd754e8c9ccf904866ca5067604a3c1c890d10670df0e244a82c91e3059f"}
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.778558 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-lcqnw" podStartSLOduration=117.778524598 podStartE2EDuration="1m57.778524598s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:39.772501914 +0000 UTC m=+137.546116942" watchObservedRunningTime="2026-01-28 15:03:39.778524598 +0000 UTC m=+137.552139636"
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.782605 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-n2td9" event={"ID":"c158949d-4568-4cc2-8e24-8f5f24069664","Type":"ContainerStarted","Data":"c6f89fee62c783576f735fecadf9fb0d4320fb9d253f940ae5d942e9d9e8d643"}
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.799694 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6dw8" event={"ID":"9c58e09f-229a-41a8-814f-d2d919d706f6","Type":"ContainerStarted","Data":"8ed538ed783ab2141ba396387527e73125605d993b2d9efe458b285a479cf88a"}
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.802402 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cqgww"]
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.808213 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fv7gf"]
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.810058 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kvxbc" event={"ID":"64a0f7cc-6a3a-4604-a964-6fbd123e4d24","Type":"ContainerStarted","Data":"2b68f65323e6680480829dca3b37cd196b0beeeba966c7315bf0e36af13b0d7c"}
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.813648 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vzxzx" event={"ID":"7d249efd-e40b-430f-98ec-9ad9c4e5cf70","Type":"ContainerStarted","Data":"61e083f0dd5ac76c19377a90c083b3ee94542a7d8e66df52259c52833a38e95b"}
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.825354 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2zfnn" event={"ID":"3a430c60-e09a-473a-8938-c6e67c6fe89f","Type":"ContainerStarted","Data":"026df773064c90ef35ead47c48040ac5fc34ecf3719761850d306efa00144e58"}
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.826169 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2zfnn"
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.833923 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bdn99" event={"ID":"36ea2b14-0c67-4a42-b9cf-cb1a8aabc1a6","Type":"ContainerStarted","Data":"d53b34a863b16aa186d4f5be57873802c286d63eb1beaa79b91373ef8a542152"}
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.834152 4893 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-2zfnn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" start-of-body=
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.834200 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2zfnn" podUID="3a430c60-e09a-473a-8938-c6e67c6fe89f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused"
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.839960 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-8ppbb" event={"ID":"ef3c4a5f-725d-4be0-b800-ab95fba9e33e","Type":"ContainerStarted","Data":"bf269e64e08be86359579fc2cdf102caf2b5610bbaf4ce7cc95d21c1d69a9a31"}
Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.842198 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k"
event={"ID":"feaf053e-d992-479b-b7ac-f7383e0b4b35","Type":"ContainerStarted","Data":"0019da72b398ba32c789693724241126a01f7604c9416f74b5fae6af133b4fc2"} Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.842235 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k" event={"ID":"feaf053e-d992-479b-b7ac-f7383e0b4b35","Type":"ContainerStarted","Data":"710f7e777770181058cf45d2025a1d1a810c3b4ed5c55e9149cffa6e1e8937b0"} Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.843300 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k" Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.858813 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-sfkds" event={"ID":"81d03df1-14b4-4475-944e-bf81e7abca38","Type":"ContainerStarted","Data":"f184625ca95d9baf1089ff450d34df48388dbcb2319ec982e3af12b263ade7e1"} Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.858875 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-sfkds" event={"ID":"81d03df1-14b4-4475-944e-bf81e7abca38","Type":"ContainerStarted","Data":"ea9f7ff779bac53ad407a8f4e08a7bb2aae5f56c05659ff9a389f5be48003a57"} Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.863775 4893 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-6q42k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.863852 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k" podUID="feaf053e-d992-479b-b7ac-f7383e0b4b35" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.864100 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hltf8" event={"ID":"65b08b40-b2e6-4db4-8cb1-14a48a144f3b","Type":"ContainerStarted","Data":"8e56b6ca41b08bc827a8d5c155740227f3b426ab4ce087c6b89633260e13be35"} Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.866449 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:39 crc kubenswrapper[4893]: E0128 15:03:39.866667 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:40.366641523 +0000 UTC m=+138.140256551 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.866765 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:39 crc kubenswrapper[4893]: E0128 15:03:39.868351 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:40.368336099 +0000 UTC m=+138.141951207 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.868799 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qvscx" event={"ID":"285eb7ab-eacb-482f-bafb-45871026d2b1","Type":"ContainerStarted","Data":"00b4a66e1674a3a023ea3a411fcc447b41ec57733ba7e9d93b622b4aab5c03a1"} Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.871308 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6wtc2" event={"ID":"cc9b874e-9d92-4b60-affa-24d0f2286cb8","Type":"ContainerStarted","Data":"8e37991aa34c7c961b4f53c1b0fdd1cd1f34d0c4f2b10a0b6840d3858a16e781"} Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.871369 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6wtc2" event={"ID":"cc9b874e-9d92-4b60-affa-24d0f2286cb8","Type":"ContainerStarted","Data":"70610674582abd0bd3a54d6fee75627f953310a44684dd42f1675db2cc49c751"} Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.874441 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5cr6t" event={"ID":"d7dc96ec-9b60-46a3-b120-5b75ba5e7124","Type":"ContainerStarted","Data":"25ccefc8ffb736a7d3ea0a1ea4e1ba4573588736a891f88d4d7747fe9fe8fbb1"} Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.878139 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xgk22" event={"ID":"912dd730-f999-4811-bf47-485755b7d949","Type":"ContainerStarted","Data":"327c3aeb96e04a26a99518a947bf108fd5f2eece3de9efb9c80c14e794fdb09b"} Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.881523 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" event={"ID":"5ea57229-2fa9-47b3-a2f1-6c28d9434923","Type":"ContainerStarted","Data":"79b727def340dca5092e676afc153a0cc1f65fc9dafb1379d90881906b4aeb8f"} Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.883800 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" event={"ID":"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e","Type":"ContainerStarted","Data":"c62f4360ba209d8a01f6d8298d74bf6bdd9c0b6cbfaeef5c17d0ba7b5e6a88bb"} Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.883828 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" event={"ID":"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e","Type":"ContainerStarted","Data":"9dc84e4bc4f50bc0c9bf471442861127c14f9d2270653181d414466feefd8f6e"} Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.884788 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.886467 4893 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-zgw9r container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" start-of-body= Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.886514 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" podUID="17fc2b8f-01ad-426d-9dfa-4531ac3ff28e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.886599 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn" event={"ID":"b5a371e6-d5dc-4971-8abf-c193da52013c","Type":"ContainerStarted","Data":"4a4342f8a1ab49b27c3a520725c46962ccd6e6937700bfa4fcd691ad2386cf5e"} Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.887769 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-79k8x" event={"ID":"ec119cba-64e9-448f-8fa8-da55fd66884f","Type":"ContainerStarted","Data":"8dc652c9976eb4c6a5f05fc1ed0556c04fac7d6fc9b7ab3cdb0b3b0bd68c2797"} Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.890113 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw" event={"ID":"ca41e21f-75c8-48bc-8611-85bebde78fad","Type":"ContainerStarted","Data":"9823636e9780f2080879f63ad680c988c186b1bb36a3609a42c916dbb1992cbb"} Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.930050 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhwwk" event={"ID":"d4b3623d-53f7-4d3a-b3d7-b55fd0736e9d","Type":"ContainerStarted","Data":"fde22ab7d1e174886e2d4806a69927b253943bce7e130ea63f016a45dc489727"} Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.954526 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jmvhq"] Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.956737 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkj4d" 
event={"ID":"7ba6c64d-c248-4150-93c7-5acf1fcbadfd","Type":"ContainerStarted","Data":"40bfc7821e5b076f3da0af8ca176299e45783116a8987b72839761e276273c09"} Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.957708 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gnmz9"] Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.960878 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-2f7hx" event={"ID":"70e61761-82dd-4ac8-a847-1727769f4424","Type":"ContainerStarted","Data":"85847cf247ea4e6ecb8cfe39126c16c4e0fb9c31d4d3a227f31c4b6472b4dcdc"} Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.961824 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gjc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.961857 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z2gjc" podUID="e1399bb5-4202-4d0e-aac3-83bec9d52d2d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.961956 4893 patch_prober.go:28] interesting pod/console-operator-58897d9998-8jcmm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.962145 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8jcmm" podUID="7fbb0ada-30b2-4b03-bb9a-456f07e78a42" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.968458 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:39 crc kubenswrapper[4893]: E0128 15:03:39.968814 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:40.468783991 +0000 UTC m=+138.242399019 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.987066 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:39 crc kubenswrapper[4893]: E0128 15:03:39.987942 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:40.487927706 +0000 UTC m=+138.261542734 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.993227 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-d5fwk"] Jan 28 15:03:39 crc kubenswrapper[4893]: I0128 15:03:39.996065 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-n2xxg"] Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.038933 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4zlj2"] Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.055438 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fzfvj"] Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.061865 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vh5rz"] Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.065209 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9mpt"] Jan 28 15:03:40 crc kubenswrapper[4893]: W0128 15:03:40.066650 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73c142b0_ef25_4567_a816_965a127760af.slice/crio-f9518a5356108ed9e07fbddbfc5042886ce6048ac403bd5b547226c1456d250c WatchSource:0}: Error finding container f9518a5356108ed9e07fbddbfc5042886ce6048ac403bd5b547226c1456d250c: Status 404 returned error can't find the container with id f9518a5356108ed9e07fbddbfc5042886ce6048ac403bd5b547226c1456d250c Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.067529 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gr6td"] Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.069812 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mvn69"] Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.088010 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:40 crc kubenswrapper[4893]: E0128 15:03:40.088180 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:40.588153012 +0000 UTC m=+138.361768040 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.088512 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:40 crc kubenswrapper[4893]: E0128 15:03:40.090436 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:40.590418123 +0000 UTC m=+138.364033221 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:40 crc kubenswrapper[4893]: W0128 15:03:40.150560 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03627d33_2baf_4ffe_9af2_ad83eb61dd9c.slice/crio-85255c472b1fe21f85be59a3651b84b67f8e9cf4067a704d030d7a41de187bec WatchSource:0}: Error finding container 85255c472b1fe21f85be59a3651b84b67f8e9cf4067a704d030d7a41de187bec: Status 404 returned error can't find the container with id 85255c472b1fe21f85be59a3651b84b67f8e9cf4067a704d030d7a41de187bec
Jan 28 15:03:40 crc kubenswrapper[4893]: W0128 15:03:40.153912 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a587792_e86e_434f_873e_c7ce3aac8bce.slice/crio-b3d70c59917687379961107605763934b1bb3e879ddf786258ad29c437713686 WatchSource:0}: Error finding container b3d70c59917687379961107605763934b1bb3e879ddf786258ad29c437713686: Status 404 returned error can't find the container with id b3d70c59917687379961107605763934b1bb3e879ddf786258ad29c437713686
Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.189719 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:03:40 crc kubenswrapper[4893]: E0128 15:03:40.190255 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:40.690195987 +0000 UTC m=+138.463811015 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.251269 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-8jcmm" podStartSLOduration=118.25124487 podStartE2EDuration="1m58.25124487s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:40.217743312 +0000 UTC m=+137.991358340" watchObservedRunningTime="2026-01-28 15:03:40.25124487 +0000 UTC m=+138.024859898"
Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.254666 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hltf8" podStartSLOduration=118.254585791 podStartE2EDuration="1m58.254585791s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:40.25271565 +0000 UTC m=+138.026330678" watchObservedRunningTime="2026-01-28 15:03:40.254585791 +0000 UTC m=+138.028200839"
Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.291076 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:40 crc kubenswrapper[4893]: E0128 15:03:40.291417 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:40.79140404 +0000 UTC m=+138.565019068 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.334662 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-vzxzx" podStartSLOduration=118.334641865 podStartE2EDuration="1m58.334641865s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:40.303262905 +0000 UTC m=+138.076877933" watchObservedRunningTime="2026-01-28 15:03:40.334641865 +0000 UTC m=+138.108256893"
Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.335798 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-qvscx" podStartSLOduration=118.335790136 podStartE2EDuration="1m58.335790136s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:40.334601804 +0000 UTC m=+138.108216832" watchObservedRunningTime="2026-01-28 15:03:40.335790136 +0000 UTC m=+138.109405164"
Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.392585 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:03:40 crc kubenswrapper[4893]: E0128 15:03:40.392722 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:40.892683325 +0000 UTC m=+138.666298353 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.393090 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:40 crc kubenswrapper[4893]: E0128 15:03:40.393562 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:40.893554539 +0000 UTC m=+138.667169567 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.415824 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-xgk22" podStartSLOduration=118.415804469 podStartE2EDuration="1m58.415804469s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:40.413071544 +0000 UTC m=+138.186686572" watchObservedRunningTime="2026-01-28 15:03:40.415804469 +0000 UTC m=+138.189419497"
Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.436046 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-xgk22"
Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.438338 4893 patch_prober.go:28] interesting pod/router-default-5444994796-xgk22 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.438436 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xgk22" podUID="912dd730-f999-4811-bf47-485755b7d949" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.451686 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k" podStartSLOduration=118.451667321 podStartE2EDuration="1m58.451667321s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:40.450740516 +0000 UTC m=+138.224355544" watchObservedRunningTime="2026-01-28 15:03:40.451667321 +0000 UTC m=+138.225282349"
Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.494856 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:03:40 crc kubenswrapper[4893]: E0128 15:03:40.495097 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:40.99506454 +0000 UTC m=+138.768679568 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.495520 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:40 crc kubenswrapper[4893]: E0128 15:03:40.495867 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:40.995855061 +0000 UTC m=+138.769470079 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.499367 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" podStartSLOduration=118.499315256 podStartE2EDuration="1m58.499315256s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:40.495885532 +0000 UTC m=+138.269500580" watchObservedRunningTime="2026-01-28 15:03:40.499315256 +0000 UTC m=+138.272930284"
Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.576938 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-sfkds" podStartSLOduration=118.576915422 podStartE2EDuration="1m58.576915422s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:40.572201703 +0000 UTC m=+138.345816731" watchObservedRunningTime="2026-01-28 15:03:40.576915422 +0000 UTC m=+138.350530450"
Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.597419 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:03:40 crc kubenswrapper[4893]: E0128 15:03:40.597861 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:41.097833506 +0000 UTC m=+138.871448534 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.597926 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:40 crc kubenswrapper[4893]: E0128 15:03:40.598279 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:41.098263638 +0000 UTC m=+138.871878666 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.613580 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqcjb" podStartSLOduration=118.613551486 podStartE2EDuration="1m58.613551486s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:40.611926662 +0000 UTC m=+138.385541690" watchObservedRunningTime="2026-01-28 15:03:40.613551486 +0000 UTC m=+138.387166514"
Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.662208 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2zfnn" podStartSLOduration=117.662183799 podStartE2EDuration="1m57.662183799s" podCreationTimestamp="2026-01-28 15:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:40.654028546 +0000 UTC m=+138.427643574" watchObservedRunningTime="2026-01-28 15:03:40.662183799 +0000 UTC m=+138.435798827"
Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.691184 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-r6dw8" podStartSLOduration=118.691157323 podStartE2EDuration="1m58.691157323s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:40.68998763 +0000 UTC m=+138.463602668" watchObservedRunningTime="2026-01-28 15:03:40.691157323 +0000 UTC m=+138.464772351"
Jan 28
15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.699998 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:40 crc kubenswrapper[4893]: E0128 15:03:40.702032 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:41.20199788 +0000 UTC m=+138.975612918 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.804332 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:40 crc kubenswrapper[4893]: E0128 15:03:40.804760 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:41.304740494 +0000 UTC m=+139.078355522 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.905990 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:40 crc kubenswrapper[4893]: E0128 15:03:40.906211 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:41.406155713 +0000 UTC m=+139.179770741 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.906567 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:40 crc kubenswrapper[4893]: E0128 15:03:40.906994 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:41.406976065 +0000 UTC m=+139.180591093 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.967549 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhwwk" event={"ID":"d4b3623d-53f7-4d3a-b3d7-b55fd0736e9d","Type":"ContainerStarted","Data":"fd736e34c849cfbafa49d0c216da57a2cc457b45b534fcb314b68fec8721afdf"} Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.971818 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bdn99" event={"ID":"36ea2b14-0c67-4a42-b9cf-cb1a8aabc1a6","Type":"ContainerStarted","Data":"d1764297fb004adbc7ed2c5fd93727903293ad0a0818952cb274eb75b7c79247"} Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.973969 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fzfvj" event={"ID":"9a587792-e86e-434f-873e-c7ce3aac8bce","Type":"ContainerStarted","Data":"b3d70c59917687379961107605763934b1bb3e879ddf786258ad29c437713686"} Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.977028 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kvxbc" event={"ID":"64a0f7cc-6a3a-4604-a964-6fbd123e4d24","Type":"ContainerStarted","Data":"c4a557c70e57cec3bb6185b8ff5cb59fce548f98af6079209837c8c1d27f8000"} Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.979980 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn" event={"ID":"b5a371e6-d5dc-4971-8abf-c193da52013c","Type":"ContainerStarted","Data":"454eaf7ce10338a288bfc49269a2bc9cdea243ac7f60fcf90ed62ab8627e447e"} Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.982066 4893 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5cr6t" event={"ID":"d7dc96ec-9b60-46a3-b120-5b75ba5e7124","Type":"ContainerStarted","Data":"94a1d7709d4820a9a72917a8be4e01c09427891b9ee6fb7b668337fabc519c30"} Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.984964 4893 generic.go:334] "Generic (PLEG): container finished" podID="ca41e21f-75c8-48bc-8611-85bebde78fad" containerID="01a234a53a24683857792aa79f9ef5d0883fae30d2329a29fe88f46746eaea92" exitCode=0 Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.985038 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw" event={"ID":"ca41e21f-75c8-48bc-8611-85bebde78fad","Type":"ContainerDied","Data":"01a234a53a24683857792aa79f9ef5d0883fae30d2329a29fe88f46746eaea92"} Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.987465 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-fv7gf" event={"ID":"1ab80115-9e4f-48a1-8c19-a89f554962cb","Type":"ContainerStarted","Data":"b3ef0a8726a4baa6d5228801224e72864c86711b7c67f866742f49119d22e12f"} Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.991973 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-stl87" event={"ID":"d113579e-5e8a-4d1e-a4db-a739dd0ab66c","Type":"ContainerStarted","Data":"bc2ccab71778780ac765aa5d09ca7bcd389f660b56210b39b8d61b4c9aadb04f"} Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.993727 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-d5fwk" event={"ID":"803a27b8-0f88-47f7-b1aa-81f57e6c7238","Type":"ContainerStarted","Data":"b16c82c6e3c793ed73c692abd847652495badb4612e4b817ef777a3783cb37bf"} Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.995800 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n2xxg" event={"ID":"6bd4d6ea-438a-43d5-a137-f14a6c8d75f9","Type":"ContainerStarted","Data":"7d5e4d8a4a46dbd6bd43a1828c41dad43203d84339858c3bb97317060165ce20"} Jan 28 15:03:40 crc kubenswrapper[4893]: I0128 15:03:40.998375 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cqgww" event={"ID":"1bc93c92-2229-4d87-919d-d4104cf7bcab","Type":"ContainerStarted","Data":"668938ab4d40721475c6d243e24c931dffc0cc2d0ad33c243bcf3515ab940152"} Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.000289 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4zlj2" event={"ID":"ad98500b-dd9c-4691-9a0b-0e157e32d90d","Type":"ContainerStarted","Data":"3e524ef006fa5ba6337e8a9343e3a630d1d49285c69316bf7bc986ca726a476d"} Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.002810 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dz8b4" event={"ID":"c30629f6-a476-415a-9fae-6c70598bd3c3","Type":"ContainerStarted","Data":"b23e162afe0ca942202317f9ad94a83ed09963681ef7d69f79fc9b5d42687159"} Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.005438 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-8ppbb" event={"ID":"ef3c4a5f-725d-4be0-b800-ab95fba9e33e","Type":"ContainerStarted","Data":"06c9f6d6c535af3d5d2606c4463ad8cc5add8306340905e7767044daf51a21c6"} Jan 28 15:03:41 crc 
kubenswrapper[4893]: I0128 15:03:41.006741 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nxd8x" event={"ID":"96a7f787-34e0-4f85-9db1-33722d80495c","Type":"ContainerStarted","Data":"e4a9f79054a48a6d19b2790188dcc0373cfc9d60f2cff2e7100ee8a6fda8325b"} Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.007760 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:41 crc kubenswrapper[4893]: E0128 15:03:41.008651 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:41.508470246 +0000 UTC m=+139.282085274 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.017179 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jmvhq" event={"ID":"4f1e2d4c-d68d-4905-ac09-97eead457a6a","Type":"ContainerStarted","Data":"9bbe09109d5611526cdc9b553107cdb86a521a4f43fccf0a3ed8523666678129"} Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.018313 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-stl87" podStartSLOduration=6.018301736 podStartE2EDuration="6.018301736s" podCreationTimestamp="2026-01-28 15:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:41.015089617 +0000 UTC m=+138.788704645" watchObservedRunningTime="2026-01-28 15:03:41.018301736 +0000 UTC m=+138.791916764" Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.019178 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mvn69" event={"ID":"fb8b8327-6a52-41f7-b512-f6572f06c3c4","Type":"ContainerStarted","Data":"7a271c5543dfcef40ad0435cc7a82ec5e49ffed2d1f63d82f3fcba73f0508d83"} Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.021200 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dqcjb" event={"ID":"5b6faa0a-407c-485c-9d10-0ed877cdfe30","Type":"ContainerStarted","Data":"70b4ad61f07dc39acaf4a4cd6db1d09c658426553caf13c635dcdc0bcfe5a47b"} Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.022931 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gr6td" 
event={"ID":"03627d33-2baf-4ffe-9af2-ad83eb61dd9c","Type":"ContainerStarted","Data":"85255c472b1fe21f85be59a3651b84b67f8e9cf4067a704d030d7a41de187bec"} Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.024066 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9mpt" event={"ID":"c4e89991-7235-4188-8c4a-36d2dc3945f5","Type":"ContainerStarted","Data":"61102c81329592f8aa87c8b328e7a449d16fef7553dbe7645d03673cd9c2d8e1"} Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.026057 4893 generic.go:334] "Generic (PLEG): container finished" podID="cc9b874e-9d92-4b60-affa-24d0f2286cb8" containerID="8e37991aa34c7c961b4f53c1b0fdd1cd1f34d0c4f2b10a0b6840d3858a16e781" exitCode=0 Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.026126 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6wtc2" event={"ID":"cc9b874e-9d92-4b60-affa-24d0f2286cb8","Type":"ContainerDied","Data":"8e37991aa34c7c961b4f53c1b0fdd1cd1f34d0c4f2b10a0b6840d3858a16e781"} Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.027620 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkj4d" event={"ID":"7ba6c64d-c248-4150-93c7-5acf1fcbadfd","Type":"ContainerStarted","Data":"55591178034a547077c83ec93ce73bdbd74cd26d44d06b451c1a2de5db4f1625"} Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.029024 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vh5rz" event={"ID":"73c142b0-ef25-4567-a816-965a127760af","Type":"ContainerStarted","Data":"f9518a5356108ed9e07fbddbfc5042886ce6048ac403bd5b547226c1456d250c"} Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.047208 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-gnmz9" event={"ID":"82b04f86-ec41-4af7-9f43-02928feaabd8","Type":"ContainerStarted","Data":"4f290c38ac9f5995708f078c37f6a5e772535322a30c6d12592cda9af789644c"} Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.051109 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xgk22" event={"ID":"912dd730-f999-4811-bf47-485755b7d949","Type":"ContainerStarted","Data":"6cf6f5b359c2aeb0b71b6125a7eeba9830ab53ca12b631453f31dc01b3bb6d21"} Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.051701 4893 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-2zfnn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" start-of-body= Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.051759 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2zfnn" podUID="3a430c60-e09a-473a-8938-c6e67c6fe89f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.052192 4893 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-6q42k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= 
Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.052237 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k" podUID="feaf053e-d992-479b-b7ac-f7383e0b4b35" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.052328 4893 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-zgw9r container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" start-of-body= Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.052351 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" podUID="17fc2b8f-01ad-426d-9dfa-4531ac3ff28e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.052797 4893 patch_prober.go:28] interesting pod/console-operator-58897d9998-8jcmm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.052842 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8jcmm" podUID="7fbb0ada-30b2-4b03-bb9a-456f07e78a42" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.114615 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:41 crc kubenswrapper[4893]: E0128 15:03:41.115387 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:41.615361104 +0000 UTC m=+139.388976122 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.215702 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:41 crc kubenswrapper[4893]: E0128 15:03:41.215846 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:41.715828897 +0000 UTC m=+139.489443925 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.216200 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:41 crc kubenswrapper[4893]: E0128 15:03:41.216571 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:41.716561327 +0000 UTC m=+139.490176355 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.316925 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:41 crc kubenswrapper[4893]: E0128 15:03:41.317192 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:41.817158563 +0000 UTC m=+139.590773591 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.317398 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:41 crc kubenswrapper[4893]: E0128 15:03:41.317781 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:41.81776609 +0000 UTC m=+139.591381118 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.418992 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:41 crc kubenswrapper[4893]: E0128 15:03:41.419344 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:41.919321423 +0000 UTC m=+139.692936471 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.438682 4893 patch_prober.go:28] interesting pod/router-default-5444994796-xgk22 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.438745 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xgk22" podUID="912dd730-f999-4811-bf47-485755b7d949" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.521248 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:41 crc kubenswrapper[4893]: E0128 15:03:41.523083 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:42.023059725 +0000 UTC m=+139.796674813 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.622400 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:41 crc kubenswrapper[4893]: E0128 15:03:41.622929 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:42.122893429 +0000 UTC m=+139.896508467 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.724013 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:41 crc kubenswrapper[4893]: E0128 15:03:41.724378 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:42.22436455 +0000 UTC m=+139.997979578 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.825399 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:41 crc kubenswrapper[4893]: E0128 15:03:41.825594 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:42.325565783 +0000 UTC m=+140.099180831 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.825663 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:41 crc kubenswrapper[4893]: E0128 15:03:41.826045 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:42.326034375 +0000 UTC m=+140.099649423 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.927398 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:41 crc kubenswrapper[4893]: E0128 15:03:41.927597 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:42.427566887 +0000 UTC m=+140.201181915 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:41 crc kubenswrapper[4893]: I0128 15:03:41.927788 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:41 crc kubenswrapper[4893]: E0128 15:03:41.928145 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:42.428138272 +0000 UTC m=+140.201753300 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.030252 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:42 crc kubenswrapper[4893]: E0128 15:03:42.030557 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:42.530518578 +0000 UTC m=+140.304133606 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.031088 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:42 crc kubenswrapper[4893]: E0128 15:03:42.031564 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:42.531553786 +0000 UTC m=+140.305168814 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.063996 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" event={"ID":"5ea57229-2fa9-47b3-a2f1-6c28d9434923","Type":"ContainerStarted","Data":"c4f780df834859ee2d5d2eb44561cdadc361deb72226d7cd952900bfeacd9b20"} Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.065741 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-n2td9" event={"ID":"c158949d-4568-4cc2-8e24-8f5f24069664","Type":"ContainerStarted","Data":"b7563d5e434758d5e316fd596a1f3a13748c985f67e4a116ac201e302759d9e2"} Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.067637 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-2f7hx" event={"ID":"70e61761-82dd-4ac8-a847-1727769f4424","Type":"ContainerStarted","Data":"6bdc499c7e005d1c8dcb20fc5a067717620c7df8396b4fbbf84d56ca8f3e40b6"} Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.071598 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-79k8x" event={"ID":"ec119cba-64e9-448f-8fa8-da55fd66884f","Type":"ContainerStarted","Data":"18fc73aecda57ee8aecfdda4905a3e69ee0d1a1909e987d4dfd1dc9939cce49c"} Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.074600 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhwwk" event={"ID":"d4b3623d-53f7-4d3a-b3d7-b55fd0736e9d","Type":"ContainerStarted","Data":"36e11a124d9c48f30be4fb8188f58685397861bc1052dc316bf58267a653689f"} Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.076599 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-fv7gf" event={"ID":"1ab80115-9e4f-48a1-8c19-a89f554962cb","Type":"ContainerStarted","Data":"756e503076033b3b213e09495a83003d206c3dc1de63d01fa495f8165e7cfbe7"} Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.077163 4893 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-2zfnn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" start-of-body= Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.077217 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2zfnn" podUID="3a430c60-e09a-473a-8938-c6e67c6fe89f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.077443 4893 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-zgw9r container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" 
start-of-body= Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.077587 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" podUID="17fc2b8f-01ad-426d-9dfa-4531ac3ff28e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.077695 4893 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-6q42k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.077798 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k" podUID="feaf053e-d992-479b-b7ac-f7383e0b4b35" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.132433 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:42 crc kubenswrapper[4893]: E0128 15:03:42.132555 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:42.632533303 +0000 UTC m=+140.406148331 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.133618 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:42 crc kubenswrapper[4893]: E0128 15:03:42.133972 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:42.633952892 +0000 UTC m=+140.407567920 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.236852 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:42 crc kubenswrapper[4893]: E0128 15:03:42.237045 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:42.737007746 +0000 UTC m=+140.510622774 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.237421 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:42 crc kubenswrapper[4893]: E0128 15:03:42.237827 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:42.737813848 +0000 UTC m=+140.511428876 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.338714 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:42 crc kubenswrapper[4893]: E0128 15:03:42.339000 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:42.838966229 +0000 UTC m=+140.612581267 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.437911 4893 patch_prober.go:28] interesting pod/router-default-5444994796-xgk22 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.437989 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xgk22" podUID="912dd730-f999-4811-bf47-485755b7d949" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.440557 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:42 crc kubenswrapper[4893]: E0128 15:03:42.441025 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:42.941008734 +0000 UTC m=+140.714623762 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.541265 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:42 crc kubenswrapper[4893]: E0128 15:03:42.541430 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:43.041397215 +0000 UTC m=+140.815012243 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.541548 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:42 crc kubenswrapper[4893]: E0128 15:03:42.541980 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:43.041967341 +0000 UTC m=+140.815582369 (durationBeforeRetry 500ms). 
Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.642839 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:03:42 crc kubenswrapper[4893]: E0128 15:03:42.642859 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:43.142838985 +0000 UTC m=+140.916454013 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.643269 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:42 crc kubenswrapper[4893]: E0128 15:03:42.643674 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:43.143662467 +0000 UTC m=+140.917277495 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.743969 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:03:42 crc kubenswrapper[4893]: E0128 15:03:42.744184 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:43.24415125 +0000 UTC m=+141.017766278 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.744337 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:42 crc kubenswrapper[4893]: E0128 15:03:42.744695 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:43.244687855 +0000 UTC m=+141.018302873 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.845753 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:03:42 crc kubenswrapper[4893]: E0128 15:03:42.845968 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:43.345933468 +0000 UTC m=+141.119548496 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.846085 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:42 crc kubenswrapper[4893]: E0128 15:03:42.846387 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:43.34637456 +0000 UTC m=+141.119989588 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.947579 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:03:42 crc kubenswrapper[4893]: E0128 15:03:42.947778 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:43.447753548 +0000 UTC m=+141.221368566 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:42 crc kubenswrapper[4893]: I0128 15:03:42.948163 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:42 crc kubenswrapper[4893]: E0128 15:03:42.948543 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:43.44853308 +0000 UTC m=+141.222148108 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.049650 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:03:43 crc kubenswrapper[4893]: E0128 15:03:43.049991 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:43.549975309 +0000 UTC m=+141.323590337 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.111978 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kvxbc" event={"ID":"64a0f7cc-6a3a-4604-a964-6fbd123e4d24","Type":"ContainerStarted","Data":"8af55fd438d237d583b81adf9bee7eda39a469e658088d25d5207fa4cfa18546"}
Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.122293 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cqgww" event={"ID":"1bc93c92-2229-4d87-919d-d4104cf7bcab","Type":"ContainerStarted","Data":"8810d830a7ef450ec876be93a951355f5af88580bbeedc29482d5eae87e171b9"}
Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.138853 4893 generic.go:334] "Generic (PLEG): container finished" podID="5ea57229-2fa9-47b3-a2f1-6c28d9434923" containerID="c4f780df834859ee2d5d2eb44561cdadc361deb72226d7cd952900bfeacd9b20" exitCode=0
Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.138943 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" event={"ID":"5ea57229-2fa9-47b3-a2f1-6c28d9434923","Type":"ContainerDied","Data":"c4f780df834859ee2d5d2eb44561cdadc361deb72226d7cd952900bfeacd9b20"}
Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.146389 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-8ppbb" event={"ID":"ef3c4a5f-725d-4be0-b800-ab95fba9e33e","Type":"ContainerStarted","Data":"1f6a787299dcab9289ea6edb38ebf4653462746db57faddb8699f523e9405e46"}
Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.152206 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
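The nestedpendingoperations.go:348 lines show how the volume manager paces these failures: each failed mount or unmount sets a "No retries permitted until" deadline (durationBeforeRetry, 500ms here), and reconciler passes before that deadline are rejected. The sketch below is an illustrative reconstruction of that gating with a fixed delay, not the kubelet's actual nestedpendingoperations code, which also grows the delay exponentially on repeated failures:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// gatedOp models one pending volume operation with a retry deadline.
type gatedOp struct {
	notBefore time.Time     // "No retries permitted until ..."
	delay     time.Duration // durationBeforeRetry
}

// tryRun refuses to run before the deadline; on failure it pushes
// the deadline out by the configured delay.
func (g *gatedOp) tryRun(run func() error) error {
	if time.Now().Before(g.notBefore) {
		return errors.New("operation deferred: retry window not reached")
	}
	if err := run(); err != nil {
		g.notBefore = time.Now().Add(g.delay)
		return err
	}
	return nil
}

func main() {
	op := &gatedOp{delay: 500 * time.Millisecond}
	for i := 0; i < 3; i++ {
		err := op.tryRun(func() error {
			// Stand-in for the failing MountDevice/TearDown calls above.
			return errors.New("driver name kubevirt.io.hostpath-provisioner not found")
		})
		fmt.Println(i, err)
		time.Sleep(200 * time.Millisecond) // reconciler passes arrive faster than the gate opens
	}
}
```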
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:43 crc kubenswrapper[4893]: E0128 15:03:43.152940 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:43.65292195 +0000 UTC m=+141.426536988 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.157431 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-gnmz9" event={"ID":"82b04f86-ec41-4af7-9f43-02928feaabd8","Type":"ContainerStarted","Data":"7e0fa23fecb3e44743c2a949d19bc9b7bdc3b88f18d25cc02b352e76ce1af225"} Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.175511 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jmvhq" event={"ID":"4f1e2d4c-d68d-4905-ac09-97eead457a6a","Type":"ContainerStarted","Data":"2fad22fce2560297c1d0d337a329af3a5b264ec2c026ae5679b6aaf3a61d9082"} Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.181591 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gr6td" event={"ID":"03627d33-2baf-4ffe-9af2-ad83eb61dd9c","Type":"ContainerStarted","Data":"ddb412e7102f45a7c4755073f37beb8800f2f608aaf183490ac19d36b382df8f"} Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.186923 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dz8b4" event={"ID":"c30629f6-a476-415a-9fae-6c70598bd3c3","Type":"ContainerStarted","Data":"a03a5f0ac0e0240e0bb32a7b415f2f53df0dcb666c36f81396189e031c98b0d0"} Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.210308 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn" Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.210800 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkj4d" Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.211195 4893 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-nj5sn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.211235 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn" podUID="b5a371e6-d5dc-4971-8abf-c193da52013c" containerName="route-controller-manager" probeResult="failure" 
output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.211470 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cqgww" podStartSLOduration=120.211455203 podStartE2EDuration="2m0.211455203s" podCreationTimestamp="2026-01-28 15:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:43.208636366 +0000 UTC m=+140.982251404" watchObservedRunningTime="2026-01-28 15:03:43.211455203 +0000 UTC m=+140.985070231" Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.222741 4893 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-tkj4d container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.222841 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkj4d" podUID="7ba6c64d-c248-4150-93c7-5acf1fcbadfd" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.252921 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:43 crc kubenswrapper[4893]: E0128 15:03:43.253371 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:43.75334054 +0000 UTC m=+141.526955588 (durationBeforeRetry 500ms). 
Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.256643 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-fv7gf" podStartSLOduration=120.256624891 podStartE2EDuration="2m0.256624891s" podCreationTimestamp="2026-01-28 15:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:43.252621741 +0000 UTC m=+141.026236789" watchObservedRunningTime="2026-01-28 15:03:43.256624891 +0000 UTC m=+141.030239919"
Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.280003 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn" podStartSLOduration=120.27997921 podStartE2EDuration="2m0.27997921s" podCreationTimestamp="2026-01-28 15:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:43.271247131 +0000 UTC m=+141.044862159" watchObservedRunningTime="2026-01-28 15:03:43.27997921 +0000 UTC m=+141.053594228"
Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.290823 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bdn99" podStartSLOduration=121.290800337 podStartE2EDuration="2m1.290800337s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:43.287804395 +0000 UTC m=+141.061419433" watchObservedRunningTime="2026-01-28 15:03:43.290800337 +0000 UTC m=+141.064415365"
Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.361137 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:43 crc kubenswrapper[4893]: E0128 15:03:43.370315 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:43.870290534 +0000 UTC m=+141.643905562 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.399596 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-2f7hx" podStartSLOduration=120.399572537 podStartE2EDuration="2m0.399572537s" podCreationTimestamp="2026-01-28 15:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:43.319053191 +0000 UTC m=+141.092668219" watchObservedRunningTime="2026-01-28 15:03:43.399572537 +0000 UTC m=+141.173187565"
Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.400347 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkj4d" podStartSLOduration=121.400342758 podStartE2EDuration="2m1.400342758s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:43.396237255 +0000 UTC m=+141.169852303" watchObservedRunningTime="2026-01-28 15:03:43.400342758 +0000 UTC m=+141.173957786"
Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.420377 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-nhwwk" podStartSLOduration=121.420363316 podStartE2EDuration="2m1.420363316s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:43.41977516 +0000 UTC m=+141.193390188" watchObservedRunningTime="2026-01-28 15:03:43.420363316 +0000 UTC m=+141.193978344"
Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.437487 4893 patch_prober.go:28] interesting pod/router-default-5444994796-xgk22 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.437543 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xgk22" podUID="912dd730-f999-4811-bf47-485755b7d949" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.458837 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-79k8x" podStartSLOduration=8.45881409 podStartE2EDuration="8.45881409s" podCreationTimestamp="2026-01-28 15:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:43.457577866 +0000 UTC m=+141.231192924" watchObservedRunningTime="2026-01-28 15:03:43.45881409 +0000 UTC m=+141.232429118"
Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.462012 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
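The pod_startup_latency_tracker lines derive podStartE2EDuration as watchObservedRunningTime minus podCreationTimestamp; for ingress-canary-79k8x above, 15:03:43.45881409 minus 15:03:35 gives the logged 8.45881409s. A sketch reproducing the service-ca-operator figure, with timestamps copied from the log (the monotonic "m=+..." suffix trimmed):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST"
	// Values from the service-ca-operator-777779d784-cqgww entry.
	created, err := time.Parse(layout, "2026-01-28 15:01:43 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2026-01-28 15:03:43.211455203 +0000 UTC")
	if err != nil {
		panic(err)
	}
	e2e := observed.Sub(created)
	fmt.Println(e2e)           // 2m0.211455203s, matching podStartE2EDuration
	fmt.Println(e2e.Seconds()) // 120.211455203, matching podStartSLOduration
}
```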
Jan 28 15:03:43 crc kubenswrapper[4893]: E0128 15:03:43.462552 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:43.962532182 +0000 UTC m=+141.736147210 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.564071 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:43 crc kubenswrapper[4893]: E0128 15:03:43.564443 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:44.064431354 +0000 UTC m=+141.838046382 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.664733 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:03:43 crc kubenswrapper[4893]: E0128 15:03:43.664914 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:44.164886295 +0000 UTC m=+141.938501333 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.665364 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:43 crc kubenswrapper[4893]: E0128 15:03:43.665767 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:44.16575979 +0000 UTC m=+141.939374808 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.767080 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:03:43 crc kubenswrapper[4893]: E0128 15:03:43.767557 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:44.267456106 +0000 UTC m=+142.041071174 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.767929 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:43 crc kubenswrapper[4893]: E0128 15:03:43.768686 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:44.268659839 +0000 UTC m=+142.042274897 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.868917 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:03:43 crc kubenswrapper[4893]: E0128 15:03:43.869096 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:44.36906531 +0000 UTC m=+142.142680338 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.869305 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:43 crc kubenswrapper[4893]: E0128 15:03:43.869718 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:44.369704887 +0000 UTC m=+142.143319915 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.971094 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:03:43 crc kubenswrapper[4893]: E0128 15:03:43.971541 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:44.471452345 +0000 UTC m=+142.245067413 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:43 crc kubenswrapper[4893]: I0128 15:03:43.971957 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:43 crc kubenswrapper[4893]: E0128 15:03:43.972413 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:44.47238926 +0000 UTC m=+142.246004288 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.073080 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:03:44 crc kubenswrapper[4893]: E0128 15:03:44.073315 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:44.573290195 +0000 UTC m=+142.346905223 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.073386 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:44 crc kubenswrapper[4893]: E0128 15:03:44.073892 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:44.573880541 +0000 UTC m=+142.347495569 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.113389 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w"
Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.174861 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:03:44 crc kubenswrapper[4893]: E0128 15:03:44.175007 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:44.674985931 +0000 UTC m=+142.448600959 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.175433 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:44 crc kubenswrapper[4893]: E0128 15:03:44.175882 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:44.675869516 +0000 UTC m=+142.449484544 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.225342 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-d5fwk" event={"ID":"803a27b8-0f88-47f7-b1aa-81f57e6c7238","Type":"ContainerStarted","Data":"d8c6968f516c9de8bc254383a92a6abd90d898b3b082d156684370af669c1080"}
Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.252301 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fzfvj" event={"ID":"9a587792-e86e-434f-873e-c7ce3aac8bce","Type":"ContainerStarted","Data":"5f7dbf0ce267fc9b6893df92fe6adfff76f434e191e61748f30e887f981629b4"}
Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.252557 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fzfvj"
Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.256104 4893 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fzfvj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body=
Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.256195 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fzfvj" podUID="9a587792-e86e-434f-873e-c7ce3aac8bce" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused"
Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.257695 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6wtc2" event={"ID":"cc9b874e-9d92-4b60-affa-24d0f2286cb8","Type":"ContainerStarted","Data":"0dcc7c3cebe06c62944314aa45e03bd09aae1a10fc0072d203ab5a42cff7da84"}
event={"ID":"cc9b874e-9d92-4b60-affa-24d0f2286cb8","Type":"ContainerStarted","Data":"0dcc7c3cebe06c62944314aa45e03bd09aae1a10fc0072d203ab5a42cff7da84"} Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.258609 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6wtc2" Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.267859 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw" event={"ID":"ca41e21f-75c8-48bc-8611-85bebde78fad","Type":"ContainerStarted","Data":"1289efaec68dc15e69450da291f5a4c0d9c5f85bc0a1f5575dcd62ae25432f29"} Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.276462 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vh5rz" event={"ID":"73c142b0-ef25-4567-a816-965a127760af","Type":"ContainerStarted","Data":"5af8b91dc6dbecf48a17946653504f2d61229379ab72dc104940ca459181c94f"} Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.277729 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:44 crc kubenswrapper[4893]: E0128 15:03:44.278796 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:44.778765785 +0000 UTC m=+142.552380813 (durationBeforeRetry 500ms). 
Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.283143 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-fzfvj" podStartSLOduration=122.283120684 podStartE2EDuration="2m2.283120684s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:44.27969297 +0000 UTC m=+142.053308018" watchObservedRunningTime="2026-01-28 15:03:44.283120684 +0000 UTC m=+142.056735712"
Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.297746 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nxd8x" event={"ID":"96a7f787-34e0-4f85-9db1-33722d80495c","Type":"ContainerStarted","Data":"0815aa55acf9714f836ddb8ff23ec47f04032b4699010a66dd6f75ee13b1869c"}
Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.310607 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mvn69" event={"ID":"fb8b8327-6a52-41f7-b512-f6572f06c3c4","Type":"ContainerStarted","Data":"faa18d1c8bcededbbe464d58218f7305c9a10df4bfd62566da6cf801eb7dcf80"}
Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.316067 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw" podStartSLOduration=122.316047366 podStartE2EDuration="2m2.316047366s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:44.311892232 +0000 UTC m=+142.085507250" watchObservedRunningTime="2026-01-28 15:03:44.316047366 +0000 UTC m=+142.089662394"
Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.316751 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9mpt" event={"ID":"c4e89991-7235-4188-8c4a-36d2dc3945f5","Type":"ContainerStarted","Data":"65adbda235261ba0310d34275ca3fb62b409fc4d75bbf03bdbfd4fb719d120f8"}
Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.317801 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9mpt"
Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.322350 4893 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-r9mpt container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.322403 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9mpt" podUID="c4e89991-7235-4188-8c4a-36d2dc3945f5" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused"
output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.333766 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4zlj2" event={"ID":"ad98500b-dd9c-4691-9a0b-0e157e32d90d","Type":"ContainerStarted","Data":"27f4555917ebbe8f4266683eee5ce1ac8c48990890c2ea6453f2b88fa828dc81"} Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.337283 4893 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-nj5sn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.337328 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn" podUID="b5a371e6-d5dc-4971-8abf-c193da52013c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.338932 4893 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-tkj4d container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.341708 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkj4d" podUID="7ba6c64d-c248-4150-93c7-5acf1fcbadfd" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.338949 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6wtc2" podStartSLOduration=122.338932913 podStartE2EDuration="2m2.338932913s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:44.336804604 +0000 UTC m=+142.110419622" watchObservedRunningTime="2026-01-28 15:03:44.338932913 +0000 UTC m=+142.112547941" Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.361367 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kvxbc" podStartSLOduration=122.361346077 podStartE2EDuration="2m2.361346077s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:44.359698971 +0000 UTC m=+142.133313989" watchObservedRunningTime="2026-01-28 15:03:44.361346077 +0000 UTC m=+142.134961105" Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.380555 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:44 crc kubenswrapper[4893]: E0128 15:03:44.383990 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:44.883972557 +0000 UTC m=+142.657587585 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.392377 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-gnmz9" podStartSLOduration=122.392358687 podStartE2EDuration="2m2.392358687s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:44.391811142 +0000 UTC m=+142.165426190" watchObservedRunningTime="2026-01-28 15:03:44.392358687 +0000 UTC m=+142.165973715" Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.423377 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nxd8x" podStartSLOduration=122.423348266 podStartE2EDuration="2m2.423348266s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:44.421694601 +0000 UTC m=+142.195309639" watchObservedRunningTime="2026-01-28 15:03:44.423348266 +0000 UTC m=+142.196963294" Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.437026 4893 patch_prober.go:28] interesting pod/router-default-5444994796-xgk22 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.437081 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xgk22" podUID="912dd730-f999-4811-bf47-485755b7d949" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.465620 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9mpt" podStartSLOduration=122.465592433 podStartE2EDuration="2m2.465592433s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:44.464574955 +0000 UTC m=+142.238190003" watchObservedRunningTime="2026-01-28 15:03:44.465592433 +0000 UTC m=+142.239207461" Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.484074 4893 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:44 crc kubenswrapper[4893]: E0128 15:03:44.484578 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:44.984559383 +0000 UTC m=+142.758174411 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.485371 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4zlj2" podStartSLOduration=122.485346804 podStartE2EDuration="2m2.485346804s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:44.485101107 +0000 UTC m=+142.258716135" watchObservedRunningTime="2026-01-28 15:03:44.485346804 +0000 UTC m=+142.258961832" Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.538664 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-8ppbb" podStartSLOduration=122.538633214 podStartE2EDuration="2m2.538633214s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:44.529761141 +0000 UTC m=+142.303376169" watchObservedRunningTime="2026-01-28 15:03:44.538633214 +0000 UTC m=+142.312248352" Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.557763 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gr6td" podStartSLOduration=122.557745038 podStartE2EDuration="2m2.557745038s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:44.549702897 +0000 UTC m=+142.323317925" watchObservedRunningTime="2026-01-28 15:03:44.557745038 +0000 UTC m=+142.331360066" Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.585574 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:44 crc kubenswrapper[4893]: E0128 15:03:44.585936 4893 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:45.08592307 +0000 UTC m=+142.859538088 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.687397 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:44 crc kubenswrapper[4893]: E0128 15:03:44.687657 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:45.187622576 +0000 UTC m=+142.961237604 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.688061 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:44 crc kubenswrapper[4893]: E0128 15:03:44.688444 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:45.188431629 +0000 UTC m=+142.962046657 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.789388 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:44 crc kubenswrapper[4893]: E0128 15:03:44.789570 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:45.289541749 +0000 UTC m=+143.063156767 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.789948 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:44 crc kubenswrapper[4893]: E0128 15:03:44.790490 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:45.290451623 +0000 UTC m=+143.064066711 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.891617 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:44 crc kubenswrapper[4893]: E0128 15:03:44.891824 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:45.39179856 +0000 UTC m=+143.165413598 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.892366 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:44 crc kubenswrapper[4893]: E0128 15:03:44.892763 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:45.392749626 +0000 UTC m=+143.166364654 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.993819 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:44 crc kubenswrapper[4893]: E0128 15:03:44.994080 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:45.494041421 +0000 UTC m=+143.267656449 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:44 crc kubenswrapper[4893]: I0128 15:03:44.994639 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:44 crc kubenswrapper[4893]: E0128 15:03:44.995040 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:45.495031938 +0000 UTC m=+143.268646966 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.096366 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:45 crc kubenswrapper[4893]: E0128 15:03:45.097313 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:45.59729156 +0000 UTC m=+143.370906588 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.198699 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:45 crc kubenswrapper[4893]: E0128 15:03:45.199096 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:45.699082579 +0000 UTC m=+143.472697607 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.299831 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:45 crc kubenswrapper[4893]: E0128 15:03:45.300257 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:45.80024059 +0000 UTC m=+143.573855618 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.339627 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jmvhq" event={"ID":"4f1e2d4c-d68d-4905-ac09-97eead457a6a","Type":"ContainerStarted","Data":"f82dc004ba84612788abfd950cd453cab7f7c841293f4220d2c2b5e4780d0c1a"} Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.343227 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mvn69" event={"ID":"fb8b8327-6a52-41f7-b512-f6572f06c3c4","Type":"ContainerStarted","Data":"e961eb8c9a609f6a6bc2c2e6600fd530c6abff4253439b543d83306f4a9fea07"} Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.345179 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dz8b4" event={"ID":"c30629f6-a476-415a-9fae-6c70598bd3c3","Type":"ContainerStarted","Data":"3ea93c2fd49bd9716200532d160f884911a1957d0ddabe22c51bfd73917e0144"} Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.345637 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dz8b4" Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.347503 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5cr6t" event={"ID":"d7dc96ec-9b60-46a3-b120-5b75ba5e7124","Type":"ContainerStarted","Data":"0a487144ee0826b597aa3ad74318bef9361690ed8a01e15ae7a898bdfd60a68f"} Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.349705 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" 
event={"ID":"5ea57229-2fa9-47b3-a2f1-6c28d9434923","Type":"ContainerStarted","Data":"1e4b5f142c07365e8d19b5f0d6ef25e14401b979e6d18102fe79cf83f67c7c57"} Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.349730 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" event={"ID":"5ea57229-2fa9-47b3-a2f1-6c28d9434923","Type":"ContainerStarted","Data":"669e24f9d0b5582e5d0da8dcdc01eb741f6a942be4b7f53679fe8d9ba44cc771"} Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.351982 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vh5rz" event={"ID":"73c142b0-ef25-4567-a816-965a127760af","Type":"ContainerStarted","Data":"5f8486aab63b93a0f2530cf14a8720f640561fdba4a0e8af1eb0b548c1cea3b1"} Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.354313 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-n2td9" event={"ID":"c158949d-4568-4cc2-8e24-8f5f24069664","Type":"ContainerStarted","Data":"5b7e627d09e8495ce74fbf173354faa3bcc36310fe464081f16d0bf2d1908c92"} Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.357586 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-d5fwk" event={"ID":"803a27b8-0f88-47f7-b1aa-81f57e6c7238","Type":"ContainerStarted","Data":"8005695e04069af4b2b4e8940fe93c8eca1e6fcd9ec4e7deb9a7ec4ad9ccc616"} Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.357621 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-d5fwk" Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.360151 4893 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-r9mpt container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.360191 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9mpt" podUID="c4e89991-7235-4188-8c4a-36d2dc3945f5" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.360361 4893 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fzfvj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.360429 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fzfvj" podUID="9a587792-e86e-434f-873e-c7ce3aac8bce" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.373228 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jmvhq" podStartSLOduration=123.373211529 podStartE2EDuration="2m3.373211529s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:45.369096677 +0000 UTC m=+143.142711715" watchObservedRunningTime="2026-01-28 15:03:45.373211529 +0000 UTC m=+143.146826557" Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.373799 4893 csr.go:261] certificate signing request csr-w5zd6 is approved, waiting to be issued Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.385087 4893 csr.go:257] certificate signing request csr-w5zd6 is issued Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.409648 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:45 crc kubenswrapper[4893]: E0128 15:03:45.412936 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:45.912921697 +0000 UTC m=+143.686536725 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.415810 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" podStartSLOduration=123.415790256 podStartE2EDuration="2m3.415790256s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:45.411077547 +0000 UTC m=+143.184692595" watchObservedRunningTime="2026-01-28 15:03:45.415790256 +0000 UTC m=+143.189405294" Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.440140 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-n2td9" podStartSLOduration=123.440120933 podStartE2EDuration="2m3.440120933s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:45.439047544 +0000 UTC m=+143.212662592" watchObservedRunningTime="2026-01-28 15:03:45.440120933 +0000 UTC m=+143.213735961" Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.448423 4893 patch_prober.go:28] interesting pod/router-default-5444994796-xgk22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:03:45 crc kubenswrapper[4893]: [-]has-synced failed: reason withheld Jan 28 15:03:45 crc kubenswrapper[4893]: [+]process-running ok Jan 28 15:03:45 crc kubenswrapper[4893]: healthz check failed Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.448859 4893 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-xgk22" podUID="912dd730-f999-4811-bf47-485755b7d949" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.461588 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-5cr6t" podStartSLOduration=123.461566731 podStartE2EDuration="2m3.461566731s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:45.460002807 +0000 UTC m=+143.233617845" watchObservedRunningTime="2026-01-28 15:03:45.461566731 +0000 UTC m=+143.235181779" Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.500840 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dz8b4" podStartSLOduration=122.500823436 podStartE2EDuration="2m2.500823436s" podCreationTimestamp="2026-01-28 15:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:45.500308702 +0000 UTC m=+143.273923740" watchObservedRunningTime="2026-01-28 15:03:45.500823436 +0000 UTC m=+143.274438464" Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.512019 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:45 crc kubenswrapper[4893]: E0128 15:03:45.513273 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:46.013252916 +0000 UTC m=+143.786867954 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.513559 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:45 crc kubenswrapper[4893]: E0128 15:03:45.514086 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:46.014070778 +0000 UTC m=+143.787685806 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.578987 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mvn69" podStartSLOduration=123.578969007 podStartE2EDuration="2m3.578969007s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:45.576044827 +0000 UTC m=+143.349659855" watchObservedRunningTime="2026-01-28 15:03:45.578969007 +0000 UTC m=+143.352584035" Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.580960 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-d5fwk" podStartSLOduration=10.580948391 podStartE2EDuration="10.580948391s" podCreationTimestamp="2026-01-28 15:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:45.541936802 +0000 UTC m=+143.315551830" watchObservedRunningTime="2026-01-28 15:03:45.580948391 +0000 UTC m=+143.354563429" Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.617243 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:45 crc kubenswrapper[4893]: E0128 15:03:45.617706 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:46.117676037 +0000 UTC m=+143.891291065 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.617814 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:45 crc kubenswrapper[4893]: E0128 15:03:45.618365 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:46.118347646 +0000 UTC m=+143.891962674 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.719107 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:45 crc kubenswrapper[4893]: E0128 15:03:45.719778 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:46.219728093 +0000 UTC m=+143.993343171 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.826366 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:45 crc kubenswrapper[4893]: E0128 15:03:45.826702 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:46.326690174 +0000 UTC m=+144.100305202 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.927812 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:45 crc kubenswrapper[4893]: E0128 15:03:45.928082 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:46.428061181 +0000 UTC m=+144.201676209 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:45 crc kubenswrapper[4893]: I0128 15:03:45.928603 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:45 crc kubenswrapper[4893]: E0128 15:03:45.929266 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:46.429247934 +0000 UTC m=+144.202862962 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.029620 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:46 crc kubenswrapper[4893]: E0128 15:03:46.029802 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:46.529772488 +0000 UTC m=+144.303387516 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.030742 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:46 crc kubenswrapper[4893]: E0128 15:03:46.031265 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:46.531248228 +0000 UTC m=+144.304863266 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.132051 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:46 crc kubenswrapper[4893]: E0128 15:03:46.132174 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:46.632150133 +0000 UTC m=+144.405765181 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.132230 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:46 crc kubenswrapper[4893]: E0128 15:03:46.132621 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:46.632613055 +0000 UTC m=+144.406228083 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.234130 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:46 crc kubenswrapper[4893]: E0128 15:03:46.234373 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:46.734336713 +0000 UTC m=+144.507951741 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.234756 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:46 crc kubenswrapper[4893]: E0128 15:03:46.235241 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:46.735223297 +0000 UTC m=+144.508838405 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.336032 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:46 crc kubenswrapper[4893]: E0128 15:03:46.336337 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:46.836321067 +0000 UTC m=+144.609936095 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.363873 4893 generic.go:334] "Generic (PLEG): container finished" podID="70e61761-82dd-4ac8-a847-1727769f4424" containerID="6bdc499c7e005d1c8dcb20fc5a067717620c7df8396b4fbbf84d56ca8f3e40b6" exitCode=0 Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.363985 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-2f7hx" event={"ID":"70e61761-82dd-4ac8-a847-1727769f4424","Type":"ContainerDied","Data":"6bdc499c7e005d1c8dcb20fc5a067717620c7df8396b4fbbf84d56ca8f3e40b6"} Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.366118 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n2xxg" event={"ID":"6bd4d6ea-438a-43d5-a137-f14a6c8d75f9","Type":"ContainerStarted","Data":"ec038bb1ff6c920400da839dee4f63decee92be0d4f1bf14f50a4204aeb034ef"} Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.369101 4893 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6wtc2 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.369159 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6wtc2" podUID="cc9b874e-9d92-4b60-affa-24d0f2286cb8" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.383519 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9mpt" Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.386991 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-28 14:58:45 +0000 UTC, rotation deadline is 2026-10-11 03:02:57.745647199 +0000 UTC Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.387019 4893 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6131h59m11.358630924s for next certificate rotation Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.437564 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:46 crc kubenswrapper[4893]: E0128 15:03:46.439244 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-28 15:03:46.939225936 +0000 UTC m=+144.712841064 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.441647 4893 patch_prober.go:28] interesting pod/router-default-5444994796-xgk22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:03:46 crc kubenswrapper[4893]: [-]has-synced failed: reason withheld Jan 28 15:03:46 crc kubenswrapper[4893]: [+]process-running ok Jan 28 15:03:46 crc kubenswrapper[4893]: healthz check failed Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.441690 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xgk22" podUID="912dd730-f999-4811-bf47-485755b7d949" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.447926 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vh5rz" podStartSLOduration=124.447910114 podStartE2EDuration="2m4.447910114s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:45.602301876 +0000 UTC m=+143.375916904" watchObservedRunningTime="2026-01-28 15:03:46.447910114 +0000 UTC m=+144.221525142" Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.539005 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:46 crc kubenswrapper[4893]: E0128 15:03:46.539230 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:47.039186524 +0000 UTC m=+144.812801552 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.539898 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:46 crc kubenswrapper[4893]: E0128 15:03:46.540296 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:47.040282975 +0000 UTC m=+144.813898073 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.641248 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:46 crc kubenswrapper[4893]: E0128 15:03:46.641485 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:47.141442806 +0000 UTC m=+144.915057834 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.641544 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:46 crc kubenswrapper[4893]: E0128 15:03:46.641930 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:47.141916499 +0000 UTC m=+144.915531527 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.742419 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:46 crc kubenswrapper[4893]: E0128 15:03:46.742631 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:47.242591087 +0000 UTC m=+145.016206115 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.742962 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:46 crc kubenswrapper[4893]: E0128 15:03:46.743354 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:47.243340878 +0000 UTC m=+145.016955906 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.844420 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:46 crc kubenswrapper[4893]: E0128 15:03:46.844656 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:47.344627162 +0000 UTC m=+145.118242190 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.844749 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:46 crc kubenswrapper[4893]: E0128 15:03:46.845136 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:47.345124057 +0000 UTC m=+145.118739185 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.946565 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:46 crc kubenswrapper[4893]: E0128 15:03:46.946713 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:47.446695369 +0000 UTC m=+145.220310397 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.946889 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:46 crc kubenswrapper[4893]: E0128 15:03:46.947214 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:47.447203963 +0000 UTC m=+145.220818981 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.971605 4893 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6wtc2 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.971664 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6wtc2" podUID="cc9b874e-9d92-4b60-affa-24d0f2286cb8" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.971685 4893 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6wtc2 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 28 15:03:46 crc kubenswrapper[4893]: I0128 15:03:46.971739 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6wtc2" podUID="cc9b874e-9d92-4b60-affa-24d0f2286cb8" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.048408 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:47 crc kubenswrapper[4893]: E0128 15:03:47.048657 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:47.548632532 +0000 UTC m=+145.322247570 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.048714 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:47 crc kubenswrapper[4893]: E0128 15:03:47.049030 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:47.549020812 +0000 UTC m=+145.322635840 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.150001 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:47 crc kubenswrapper[4893]: E0128 15:03:47.150106 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:47.650091001 +0000 UTC m=+145.423706029 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.150536 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:47 crc kubenswrapper[4893]: E0128 15:03:47.151097 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:47.651086639 +0000 UTC m=+145.424701667 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.251774 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:47 crc kubenswrapper[4893]: E0128 15:03:47.252004 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:47.751966863 +0000 UTC m=+145.525581891 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.252143 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:47 crc kubenswrapper[4893]: E0128 15:03:47.252520 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:47.752512078 +0000 UTC m=+145.526127106 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.353320 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:47 crc kubenswrapper[4893]: E0128 15:03:47.353781 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:47.853764902 +0000 UTC m=+145.627379930 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.441807 4893 patch_prober.go:28] interesting pod/router-default-5444994796-xgk22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:03:47 crc kubenswrapper[4893]: [-]has-synced failed: reason withheld Jan 28 15:03:47 crc kubenswrapper[4893]: [+]process-running ok Jan 28 15:03:47 crc kubenswrapper[4893]: healthz check failed Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.441900 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xgk22" podUID="912dd730-f999-4811-bf47-485755b7d949" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.456146 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:47 crc kubenswrapper[4893]: E0128 15:03:47.456528 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:47.956508327 +0000 UTC m=+145.730123355 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.556399 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5gtgr"] Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.557762 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5gtgr" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.560201 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:47 crc kubenswrapper[4893]: E0128 15:03:47.560764 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:48.060500096 +0000 UTC m=+145.834115124 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.561309 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.564238 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:47 crc kubenswrapper[4893]: E0128 15:03:47.564663 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:48.064650499 +0000 UTC m=+145.838265527 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.581570 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5gtgr"] Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.588366 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gjc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.588451 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gjc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.588551 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z2gjc" podUID="e1399bb5-4202-4d0e-aac3-83bec9d52d2d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.588432 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-z2gjc" podUID="e1399bb5-4202-4d0e-aac3-83bec9d52d2d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.671105 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:47 crc kubenswrapper[4893]: E0128 15:03:47.671523 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:48.171465487 +0000 UTC m=+145.945080515 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.671639 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zdjz\" (UniqueName: \"kubernetes.io/projected/43843abc-ea99-476a-81c0-76d6530f7c75-kube-api-access-7zdjz\") pod \"community-operators-5gtgr\" (UID: \"43843abc-ea99-476a-81c0-76d6530f7c75\") " pod="openshift-marketplace/community-operators-5gtgr" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.671720 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43843abc-ea99-476a-81c0-76d6530f7c75-catalog-content\") pod \"community-operators-5gtgr\" (UID: \"43843abc-ea99-476a-81c0-76d6530f7c75\") " pod="openshift-marketplace/community-operators-5gtgr" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.671763 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.672103 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43843abc-ea99-476a-81c0-76d6530f7c75-utilities\") pod \"community-operators-5gtgr\" (UID: \"43843abc-ea99-476a-81c0-76d6530f7c75\") " pod="openshift-marketplace/community-operators-5gtgr" Jan 28 15:03:47 crc kubenswrapper[4893]: E0128 15:03:47.672580 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:48.172572337 +0000 UTC m=+145.946187365 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.695033 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-8jcmm" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.706357 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-vzxzx" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.706424 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-vzxzx" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.713165 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2zfnn" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.724140 4893 patch_prober.go:28] interesting pod/console-f9d7485db-vzxzx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.724212 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-vzxzx" podUID="7d249efd-e40b-430f-98ec-9ad9c4e5cf70" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.774252 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-46wz5"] Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.776881 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-46wz5" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.785344 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.815643 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.816066 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zdjz\" (UniqueName: \"kubernetes.io/projected/43843abc-ea99-476a-81c0-76d6530f7c75-kube-api-access-7zdjz\") pod \"community-operators-5gtgr\" (UID: \"43843abc-ea99-476a-81c0-76d6530f7c75\") " pod="openshift-marketplace/community-operators-5gtgr" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.816108 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43843abc-ea99-476a-81c0-76d6530f7c75-catalog-content\") pod \"community-operators-5gtgr\" (UID: \"43843abc-ea99-476a-81c0-76d6530f7c75\") " pod="openshift-marketplace/community-operators-5gtgr" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.816300 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43843abc-ea99-476a-81c0-76d6530f7c75-utilities\") pod \"community-operators-5gtgr\" (UID: \"43843abc-ea99-476a-81c0-76d6530f7c75\") " pod="openshift-marketplace/community-operators-5gtgr" Jan 28 15:03:47 crc kubenswrapper[4893]: E0128 15:03:47.818251 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:48.318221317 +0000 UTC m=+146.091836345 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.820236 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43843abc-ea99-476a-81c0-76d6530f7c75-catalog-content\") pod \"community-operators-5gtgr\" (UID: \"43843abc-ea99-476a-81c0-76d6530f7c75\") " pod="openshift-marketplace/community-operators-5gtgr" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.826866 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43843abc-ea99-476a-81c0-76d6530f7c75-utilities\") pod \"community-operators-5gtgr\" (UID: \"43843abc-ea99-476a-81c0-76d6530f7c75\") " pod="openshift-marketplace/community-operators-5gtgr" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.844561 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-46wz5"] Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.883310 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.887963 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.915161 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-2f7hx" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.918374 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace4b0ad-d8d3-48aa-8635-6e6e96030672-catalog-content\") pod \"certified-operators-46wz5\" (UID: \"ace4b0ad-d8d3-48aa-8635-6e6e96030672\") " pod="openshift-marketplace/certified-operators-46wz5" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.918461 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh9zt\" (UniqueName: \"kubernetes.io/projected/ace4b0ad-d8d3-48aa-8635-6e6e96030672-kube-api-access-hh9zt\") pod \"certified-operators-46wz5\" (UID: \"ace4b0ad-d8d3-48aa-8635-6e6e96030672\") " pod="openshift-marketplace/certified-operators-46wz5" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.918583 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace4b0ad-d8d3-48aa-8635-6e6e96030672-utilities\") pod \"certified-operators-46wz5\" (UID: \"ace4b0ad-d8d3-48aa-8635-6e6e96030672\") " pod="openshift-marketplace/certified-operators-46wz5" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.918629 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:47 crc kubenswrapper[4893]: E0128 15:03:47.919075 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:48.4190572 +0000 UTC m=+146.192672318 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.925425 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zdjz\" (UniqueName: \"kubernetes.io/projected/43843abc-ea99-476a-81c0-76d6530f7c75-kube-api-access-7zdjz\") pod \"community-operators-5gtgr\" (UID: \"43843abc-ea99-476a-81c0-76d6530f7c75\") " pod="openshift-marketplace/community-operators-5gtgr" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.940068 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.940117 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.966184 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.966219 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.980003 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s2wp6"] Jan 28 15:03:47 crc kubenswrapper[4893]: E0128 15:03:47.980664 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70e61761-82dd-4ac8-a847-1727769f4424" containerName="collect-profiles" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.980685 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="70e61761-82dd-4ac8-a847-1727769f4424" containerName="collect-profiles" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.980837 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="70e61761-82dd-4ac8-a847-1727769f4424" containerName="collect-profiles" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.981699 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s2wp6" Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.987761 4893 patch_prober.go:28] interesting pod/apiserver-76f77b778f-vd8ml container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 28 15:03:47 crc kubenswrapper[4893]: I0128 15:03:47.987817 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" podUID="5ea57229-2fa9-47b3-a2f1-6c28d9434923" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.008127 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s2wp6"] Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.019485 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70e61761-82dd-4ac8-a847-1727769f4424-config-volume\") pod \"70e61761-82dd-4ac8-a847-1727769f4424\" (UID: \"70e61761-82dd-4ac8-a847-1727769f4424\") " Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.019538 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70e61761-82dd-4ac8-a847-1727769f4424-secret-volume\") pod \"70e61761-82dd-4ac8-a847-1727769f4424\" (UID: \"70e61761-82dd-4ac8-a847-1727769f4424\") " Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.019565 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn7vn\" (UniqueName: \"kubernetes.io/projected/70e61761-82dd-4ac8-a847-1727769f4424-kube-api-access-nn7vn\") pod \"70e61761-82dd-4ac8-a847-1727769f4424\" (UID: \"70e61761-82dd-4ac8-a847-1727769f4424\") " Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.019681 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.019843 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace4b0ad-d8d3-48aa-8635-6e6e96030672-utilities\") pod \"certified-operators-46wz5\" (UID: \"ace4b0ad-d8d3-48aa-8635-6e6e96030672\") " pod="openshift-marketplace/certified-operators-46wz5" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.019965 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace4b0ad-d8d3-48aa-8635-6e6e96030672-catalog-content\") pod \"certified-operators-46wz5\" (UID: \"ace4b0ad-d8d3-48aa-8635-6e6e96030672\") " pod="openshift-marketplace/certified-operators-46wz5" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.020027 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh9zt\" (UniqueName: \"kubernetes.io/projected/ace4b0ad-d8d3-48aa-8635-6e6e96030672-kube-api-access-hh9zt\") pod \"certified-operators-46wz5\" (UID: 
\"ace4b0ad-d8d3-48aa-8635-6e6e96030672\") " pod="openshift-marketplace/certified-operators-46wz5" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.021304 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70e61761-82dd-4ac8-a847-1727769f4424-config-volume" (OuterVolumeSpecName: "config-volume") pod "70e61761-82dd-4ac8-a847-1727769f4424" (UID: "70e61761-82dd-4ac8-a847-1727769f4424"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.026062 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace4b0ad-d8d3-48aa-8635-6e6e96030672-catalog-content\") pod \"certified-operators-46wz5\" (UID: \"ace4b0ad-d8d3-48aa-8635-6e6e96030672\") " pod="openshift-marketplace/certified-operators-46wz5" Jan 28 15:03:48 crc kubenswrapper[4893]: E0128 15:03:48.028084 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:48.528067916 +0000 UTC m=+146.301682944 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.030215 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70e61761-82dd-4ac8-a847-1727769f4424-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "70e61761-82dd-4ac8-a847-1727769f4424" (UID: "70e61761-82dd-4ac8-a847-1727769f4424"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.040619 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace4b0ad-d8d3-48aa-8635-6e6e96030672-utilities\") pod \"certified-operators-46wz5\" (UID: \"ace4b0ad-d8d3-48aa-8635-6e6e96030672\") " pod="openshift-marketplace/certified-operators-46wz5" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.069535 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70e61761-82dd-4ac8-a847-1727769f4424-kube-api-access-nn7vn" (OuterVolumeSpecName: "kube-api-access-nn7vn") pod "70e61761-82dd-4ac8-a847-1727769f4424" (UID: "70e61761-82dd-4ac8-a847-1727769f4424"). InnerVolumeSpecName "kube-api-access-nn7vn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.080422 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh9zt\" (UniqueName: \"kubernetes.io/projected/ace4b0ad-d8d3-48aa-8635-6e6e96030672-kube-api-access-hh9zt\") pod \"certified-operators-46wz5\" (UID: \"ace4b0ad-d8d3-48aa-8635-6e6e96030672\") " pod="openshift-marketplace/certified-operators-46wz5" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.099107 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-46wz5" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.110290 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.121223 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjqm8\" (UniqueName: \"kubernetes.io/projected/c1d61ecd-2c35-4e84-85db-9ebe350850a6-kube-api-access-fjqm8\") pod \"community-operators-s2wp6\" (UID: \"c1d61ecd-2c35-4e84-85db-9ebe350850a6\") " pod="openshift-marketplace/community-operators-s2wp6" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.121575 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.122787 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1d61ecd-2c35-4e84-85db-9ebe350850a6-catalog-content\") pod \"community-operators-s2wp6\" (UID: \"c1d61ecd-2c35-4e84-85db-9ebe350850a6\") " pod="openshift-marketplace/community-operators-s2wp6" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.123015 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1d61ecd-2c35-4e84-85db-9ebe350850a6-utilities\") pod \"community-operators-s2wp6\" (UID: \"c1d61ecd-2c35-4e84-85db-9ebe350850a6\") " pod="openshift-marketplace/community-operators-s2wp6" Jan 28 15:03:48 crc kubenswrapper[4893]: E0128 15:03:48.126127 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:48.626107762 +0000 UTC m=+146.399722790 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.127454 4893 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70e61761-82dd-4ac8-a847-1727769f4424-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.127544 4893 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70e61761-82dd-4ac8-a847-1727769f4424-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.127607 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn7vn\" (UniqueName: \"kubernetes.io/projected/70e61761-82dd-4ac8-a847-1727769f4424-kube-api-access-nn7vn\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.162440 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-77mgk"] Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.173018 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-77mgk" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.175748 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5gtgr" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.199835 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-77mgk"] Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.202060 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkj4d" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.241555 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:48 crc kubenswrapper[4893]: E0128 15:03:48.242110 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:48.74208121 +0000 UTC m=+146.515696238 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.242515 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1d61ecd-2c35-4e84-85db-9ebe350850a6-catalog-content\") pod \"community-operators-s2wp6\" (UID: \"c1d61ecd-2c35-4e84-85db-9ebe350850a6\") " pod="openshift-marketplace/community-operators-s2wp6" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.242715 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1d61ecd-2c35-4e84-85db-9ebe350850a6-utilities\") pod \"community-operators-s2wp6\" (UID: \"c1d61ecd-2c35-4e84-85db-9ebe350850a6\") " pod="openshift-marketplace/community-operators-s2wp6" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.242879 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjqm8\" (UniqueName: \"kubernetes.io/projected/c1d61ecd-2c35-4e84-85db-9ebe350850a6-kube-api-access-fjqm8\") pod \"community-operators-s2wp6\" (UID: \"c1d61ecd-2c35-4e84-85db-9ebe350850a6\") " pod="openshift-marketplace/community-operators-s2wp6" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.243004 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:48 crc kubenswrapper[4893]: E0128 15:03:48.244503 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:48.744492596 +0000 UTC m=+146.518107624 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.244922 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1d61ecd-2c35-4e84-85db-9ebe350850a6-utilities\") pod \"community-operators-s2wp6\" (UID: \"c1d61ecd-2c35-4e84-85db-9ebe350850a6\") " pod="openshift-marketplace/community-operators-s2wp6" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.250115 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1d61ecd-2c35-4e84-85db-9ebe350850a6-catalog-content\") pod \"community-operators-s2wp6\" (UID: \"c1d61ecd-2c35-4e84-85db-9ebe350850a6\") " pod="openshift-marketplace/community-operators-s2wp6" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.278425 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjqm8\" (UniqueName: \"kubernetes.io/projected/c1d61ecd-2c35-4e84-85db-9ebe350850a6-kube-api-access-fjqm8\") pod \"community-operators-s2wp6\" (UID: \"c1d61ecd-2c35-4e84-85db-9ebe350850a6\") " pod="openshift-marketplace/community-operators-s2wp6" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.290861 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.317151 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s2wp6" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.344264 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:48 crc kubenswrapper[4893]: E0128 15:03:48.344530 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:48.844460255 +0000 UTC m=+146.618075283 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.345293 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:48 crc kubenswrapper[4893]: E0128 15:03:48.346824 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:48.84681555 +0000 UTC m=+146.620430578 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.345557 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9efa33f-313e-484f-967c-1d829b6f8250-catalog-content\") pod \"certified-operators-77mgk\" (UID: \"f9efa33f-313e-484f-967c-1d829b6f8250\") " pod="openshift-marketplace/certified-operators-77mgk" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.364866 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shckb\" (UniqueName: \"kubernetes.io/projected/f9efa33f-313e-484f-967c-1d829b6f8250-kube-api-access-shckb\") pod \"certified-operators-77mgk\" (UID: \"f9efa33f-313e-484f-967c-1d829b6f8250\") " pod="openshift-marketplace/certified-operators-77mgk" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.364979 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9efa33f-313e-484f-967c-1d829b6f8250-utilities\") pod \"certified-operators-77mgk\" (UID: \"f9efa33f-313e-484f-967c-1d829b6f8250\") " pod="openshift-marketplace/certified-operators-77mgk" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.415814 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-fzfvj" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.438176 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-xgk22" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.468325 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.468611 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shckb\" (UniqueName: \"kubernetes.io/projected/f9efa33f-313e-484f-967c-1d829b6f8250-kube-api-access-shckb\") pod \"certified-operators-77mgk\" (UID: \"f9efa33f-313e-484f-967c-1d829b6f8250\") " pod="openshift-marketplace/certified-operators-77mgk" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.468646 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9efa33f-313e-484f-967c-1d829b6f8250-utilities\") pod \"certified-operators-77mgk\" (UID: \"f9efa33f-313e-484f-967c-1d829b6f8250\") " pod="openshift-marketplace/certified-operators-77mgk" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.468699 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9efa33f-313e-484f-967c-1d829b6f8250-catalog-content\") pod \"certified-operators-77mgk\" (UID: \"f9efa33f-313e-484f-967c-1d829b6f8250\") " pod="openshift-marketplace/certified-operators-77mgk" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.469110 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9efa33f-313e-484f-967c-1d829b6f8250-catalog-content\") pod \"certified-operators-77mgk\" (UID: \"f9efa33f-313e-484f-967c-1d829b6f8250\") " pod="openshift-marketplace/certified-operators-77mgk" Jan 28 15:03:48 crc kubenswrapper[4893]: E0128 15:03:48.469188 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:48.969170881 +0000 UTC m=+146.742785899 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.469635 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9efa33f-313e-484f-967c-1d829b6f8250-utilities\") pod \"certified-operators-77mgk\" (UID: \"f9efa33f-313e-484f-967c-1d829b6f8250\") " pod="openshift-marketplace/certified-operators-77mgk" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.471989 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-2f7hx"
Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.473171 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-2f7hx" event={"ID":"70e61761-82dd-4ac8-a847-1727769f4424","Type":"ContainerDied","Data":"85847cf247ea4e6ecb8cfe39126c16c4e0fb9c31d4d3a227f31c4b6472b4dcdc"}
Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.473249 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85847cf247ea4e6ecb8cfe39126c16c4e0fb9c31d4d3a227f31c4b6472b4dcdc"
Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.512945 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf8nw"
Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.544742 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shckb\" (UniqueName: \"kubernetes.io/projected/f9efa33f-313e-484f-967c-1d829b6f8250-kube-api-access-shckb\") pod \"certified-operators-77mgk\" (UID: \"f9efa33f-313e-484f-967c-1d829b6f8250\") " pod="openshift-marketplace/certified-operators-77mgk"
Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.564005 4893 patch_prober.go:28] interesting pod/router-default-5444994796-xgk22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 15:03:48 crc kubenswrapper[4893]: [-]has-synced failed: reason withheld
Jan 28 15:03:48 crc kubenswrapper[4893]: [+]process-running ok
Jan 28 15:03:48 crc kubenswrapper[4893]: healthz check failed
Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.564830 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xgk22" podUID="912dd730-f999-4811-bf47-485755b7d949" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.570046 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:03:48 crc kubenswrapper[4893]: E0128 15:03:48.573375 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:49.073356486 +0000 UTC m=+146.846971514 (durationBeforeRetry 500ms).
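The router's startup probe output above uses the aggregated healthz format: one "[-]name failed: reason withheld" or "[+]name ok" line per sub-check, then a summary line, with HTTP 500 as the failure signal the kubelet's prober records. A rough sketch of that response shape, with hypothetical check names and none of the actual HAProxy router logic:

```go
package main

import (
	"fmt"
	"net/http"
)

type check struct {
	name string
	fn   func() error
}

// healthz aggregates named checks into the "[-]/[+]" body format seen in
// the probe output above. Illustrative only, not the router's implementation.
func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for _, c := range checks {
			if err := c.fn(); err != nil {
				failed = true
				// The real endpoint withholds reasons unless queried verbosely.
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError) // the probe sees "statuscode: 500"
			body += "healthz check failed\n"
		} else {
			body += "healthz check passed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	checks := []check{
		{"backend-http", func() error { return fmt.Errorf("not ready") }},
		{"has-synced", func() error { return fmt.Errorf("not synced") }},
		{"process-running", func() error { return nil }},
	}
	http.Handle("/healthz", healthz(checks))
	http.ListenAndServe(":1936", nil) // port illustrative
}
```

Until every sub-check reports [+], such an endpoint keeps returning 500 and the kubelet keeps logging the startup-probe failures recorded here.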
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.681552 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:48 crc kubenswrapper[4893]: E0128 15:03:48.713586 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:49.187680698 +0000 UTC m=+146.961295726 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.720162 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.721954 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.756741 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.769586 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.776798 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.787120 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:48 crc kubenswrapper[4893]: E0128 15:03:48.787759 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:49.28774167 +0000 UTC m=+147.061356698 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.814099 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-77mgk" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.828125 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5gtgr"] Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.889194 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.889736 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/39c7e6c1-520f-45b8-8d19-0d77b6853f7c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"39c7e6c1-520f-45b8-8d19-0d77b6853f7c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.889825 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/39c7e6c1-520f-45b8-8d19-0d77b6853f7c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"39c7e6c1-520f-45b8-8d19-0d77b6853f7c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:03:48 crc kubenswrapper[4893]: E0128 15:03:48.889990 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:49.389973571 +0000 UTC m=+147.163588599 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.992276 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/39c7e6c1-520f-45b8-8d19-0d77b6853f7c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"39c7e6c1-520f-45b8-8d19-0d77b6853f7c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.992375 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.992432 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/39c7e6c1-520f-45b8-8d19-0d77b6853f7c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"39c7e6c1-520f-45b8-8d19-0d77b6853f7c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:03:48 crc kubenswrapper[4893]: I0128 15:03:48.992522 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/39c7e6c1-520f-45b8-8d19-0d77b6853f7c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"39c7e6c1-520f-45b8-8d19-0d77b6853f7c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:03:48 crc kubenswrapper[4893]: E0128 15:03:48.993099 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:49.493088505 +0000 UTC m=+147.266703533 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.039963 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/39c7e6c1-520f-45b8-8d19-0d77b6853f7c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"39c7e6c1-520f-45b8-8d19-0d77b6853f7c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.095085 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:49 crc kubenswrapper[4893]: E0128 15:03:49.095676 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:49.595655005 +0000 UTC m=+147.369270033 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.125619 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.202384 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:49 crc kubenswrapper[4893]: E0128 15:03:49.203056 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:49.703014807 +0000 UTC m=+147.476629905 (durationBeforeRetry 500ms). 
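Each of these nestedpendingoperations errors stamps the failed volume operation with a not-before time ("No retries permitted until ..."), and the reconciler skips the operation until that deadline passes. This excerpt shows the 500ms base delay on every attempt; in general the kubelet backs off exponentially on repeated failures of the same operation. An illustrative sketch of that gate, with the 500ms base taken from the log and the growth factor and cap assumed rather than quoted from kubelet source:

```go
package main

import (
	"fmt"
	"time"
)

type pendingOp struct {
	durationBeforeRetry time.Duration
	notBefore           time.Time
}

// recordFailure grows the wait and stamps the next permitted retry time,
// mirroring the "(durationBeforeRetry ...)" lines above. Constants assumed.
func (op *pendingOp) recordFailure(now time.Time) {
	if op.durationBeforeRetry == 0 {
		op.durationBeforeRetry = 500 * time.Millisecond
	} else {
		op.durationBeforeRetry *= 2
		if cap := 2*time.Minute + 2*time.Second; op.durationBeforeRetry > cap {
			op.durationBeforeRetry = cap
		}
	}
	op.notBefore = now.Add(op.durationBeforeRetry)
}

// mayRetry is what a reconciler loop would consult before re-attempting.
func (op *pendingOp) mayRetry(now time.Time) bool {
	return !now.Before(op.notBefore)
}

func main() {
	op := &pendingOp{}
	now := time.Now()
	for i := 0; i < 4; i++ {
		op.recordFailure(now)
		fmt.Printf("failed. No retries permitted until %s (durationBeforeRetry %s)\n",
			op.notBefore.Format(time.RFC3339Nano), op.durationBeforeRetry)
		now = op.notBefore // pretend the next attempt happens exactly at the deadline
	}
}
```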
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.276126 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s2wp6"] Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.282773 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-46wz5"] Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.311239 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:49 crc kubenswrapper[4893]: E0128 15:03:49.311982 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:49.811964282 +0000 UTC m=+147.585579310 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.348079 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-77mgk"] Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.463727 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:49 crc kubenswrapper[4893]: E0128 15:03:49.464132 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:49.964118371 +0000 UTC m=+147.737733399 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.477762 4893 patch_prober.go:28] interesting pod/router-default-5444994796-xgk22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 15:03:49 crc kubenswrapper[4893]: [-]has-synced failed: reason withheld
Jan 28 15:03:49 crc kubenswrapper[4893]: [+]process-running ok
Jan 28 15:03:49 crc kubenswrapper[4893]: healthz check failed
Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.478066 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xgk22" podUID="912dd730-f999-4811-bf47-485755b7d949" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.484939 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46wz5" event={"ID":"ace4b0ad-d8d3-48aa-8635-6e6e96030672","Type":"ContainerStarted","Data":"4f5bec911cd7e3607988a942da1f4aff96577b0cdbb4bdf24a23c17ce4e054e2"}
Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.486390 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5gtgr" event={"ID":"43843abc-ea99-476a-81c0-76d6530f7c75","Type":"ContainerStarted","Data":"cb82e0bcca4a4bbc800edf029648f91c6ab03fa68bc631ff1b6abb90cde31028"}
Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.492137 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77mgk" event={"ID":"f9efa33f-313e-484f-967c-1d829b6f8250","Type":"ContainerStarted","Data":"135816f32d58633a9545c028da79b2b096b4458795d53b6ccb6080f4ba4d2db6"}
Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.506869 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n2xxg" event={"ID":"6bd4d6ea-438a-43d5-a137-f14a6c8d75f9","Type":"ContainerStarted","Data":"e71d5d6d695e81f0998389cbda54a60117a5d49f0239ac8fab09234992743eab"}
Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.517604 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2wp6" event={"ID":"c1d61ecd-2c35-4e84-85db-9ebe350850a6","Type":"ContainerStarted","Data":"dce22a04cc5113a4aaa6557eb3f041d22128a9098460fad94fb9791307740f92"}
Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.570291 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:03:49 crc kubenswrapper[4893]: E0128 15:03:49.570804 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:50.070788473 +0000 UTC m=+147.844403501 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.576250 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 15:03:49 crc kubenswrapper[4893]: W0128 15:03:49.589906 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod39c7e6c1_520f_45b8_8d19_0d77b6853f7c.slice/crio-25e2578aa1bf358f0a2f768e29e7db77fb65192f27b4fdbf1bf4d5613f0b5e8c WatchSource:0}: Error finding container 25e2578aa1bf358f0a2f768e29e7db77fb65192f27b4fdbf1bf4d5613f0b5e8c: Status 404 returned error can't find the container with id 25e2578aa1bf358f0a2f768e29e7db77fb65192f27b4fdbf1bf4d5613f0b5e8c Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.672690 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:49 crc kubenswrapper[4893]: E0128 15:03:49.673773 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:50.173754143 +0000 UTC m=+147.947369271 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.744331 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g675f"] Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.745370 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g675f" Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.747257 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.759927 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g675f"] Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.776132 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:49 crc kubenswrapper[4893]: E0128 15:03:49.776321 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:50.276284133 +0000 UTC m=+148.049899161 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.776546 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8vtw\" (UniqueName: \"kubernetes.io/projected/94f2541b-4f69-4bbc-9388-c040e53d85a0-kube-api-access-c8vtw\") pod \"redhat-marketplace-g675f\" (UID: \"94f2541b-4f69-4bbc-9388-c040e53d85a0\") " pod="openshift-marketplace/redhat-marketplace-g675f" Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.776594 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.776625 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94f2541b-4f69-4bbc-9388-c040e53d85a0-catalog-content\") pod \"redhat-marketplace-g675f\" (UID: \"94f2541b-4f69-4bbc-9388-c040e53d85a0\") " pod="openshift-marketplace/redhat-marketplace-g675f" Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.776647 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f2541b-4f69-4bbc-9388-c040e53d85a0-utilities\") pod \"redhat-marketplace-g675f\" (UID: \"94f2541b-4f69-4bbc-9388-c040e53d85a0\") " pod="openshift-marketplace/redhat-marketplace-g675f" Jan 28 15:03:49 crc kubenswrapper[4893]: E0128 15:03:49.777053 4893 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:50.277038393 +0000 UTC m=+148.050653421 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.878226 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:49 crc kubenswrapper[4893]: E0128 15:03:49.878384 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:50.378357739 +0000 UTC m=+148.151972767 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.878547 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8vtw\" (UniqueName: \"kubernetes.io/projected/94f2541b-4f69-4bbc-9388-c040e53d85a0-kube-api-access-c8vtw\") pod \"redhat-marketplace-g675f\" (UID: \"94f2541b-4f69-4bbc-9388-c040e53d85a0\") " pod="openshift-marketplace/redhat-marketplace-g675f" Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.878609 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.878667 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94f2541b-4f69-4bbc-9388-c040e53d85a0-catalog-content\") pod \"redhat-marketplace-g675f\" (UID: \"94f2541b-4f69-4bbc-9388-c040e53d85a0\") " pod="openshift-marketplace/redhat-marketplace-g675f" Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.878701 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f2541b-4f69-4bbc-9388-c040e53d85a0-utilities\") pod \"redhat-marketplace-g675f\" (UID: \"94f2541b-4f69-4bbc-9388-c040e53d85a0\") " 
pod="openshift-marketplace/redhat-marketplace-g675f" Jan 28 15:03:49 crc kubenswrapper[4893]: E0128 15:03:49.879056 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:50.379035748 +0000 UTC m=+148.152650766 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.879218 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f2541b-4f69-4bbc-9388-c040e53d85a0-utilities\") pod \"redhat-marketplace-g675f\" (UID: \"94f2541b-4f69-4bbc-9388-c040e53d85a0\") " pod="openshift-marketplace/redhat-marketplace-g675f" Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.879301 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94f2541b-4f69-4bbc-9388-c040e53d85a0-catalog-content\") pod \"redhat-marketplace-g675f\" (UID: \"94f2541b-4f69-4bbc-9388-c040e53d85a0\") " pod="openshift-marketplace/redhat-marketplace-g675f" Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.901380 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8vtw\" (UniqueName: \"kubernetes.io/projected/94f2541b-4f69-4bbc-9388-c040e53d85a0-kube-api-access-c8vtw\") pod \"redhat-marketplace-g675f\" (UID: \"94f2541b-4f69-4bbc-9388-c040e53d85a0\") " pod="openshift-marketplace/redhat-marketplace-g675f" Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.976579 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6wtc2" Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.979314 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:49 crc kubenswrapper[4893]: E0128 15:03:49.979530 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:50.47950886 +0000 UTC m=+148.253123888 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:49 crc kubenswrapper[4893]: I0128 15:03:49.979812 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:49 crc kubenswrapper[4893]: E0128 15:03:49.980214 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:50.48020005 +0000 UTC m=+148.253815078 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.064405 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g675f" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.081195 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:50 crc kubenswrapper[4893]: E0128 15:03:50.081431 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:50.581395912 +0000 UTC m=+148.355010950 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.081638 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:50 crc kubenswrapper[4893]: E0128 15:03:50.082141 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:50.582120442 +0000 UTC m=+148.355735470 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.146643 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6rlkf"] Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.148121 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6rlkf" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.167860 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rlkf"] Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.184922 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.185209 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b19fd77-6353-4456-afd3-00dc264d614e-utilities\") pod \"redhat-marketplace-6rlkf\" (UID: \"8b19fd77-6353-4456-afd3-00dc264d614e\") " pod="openshift-marketplace/redhat-marketplace-6rlkf" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.185253 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b19fd77-6353-4456-afd3-00dc264d614e-catalog-content\") pod \"redhat-marketplace-6rlkf\" (UID: \"8b19fd77-6353-4456-afd3-00dc264d614e\") " pod="openshift-marketplace/redhat-marketplace-6rlkf" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.185316 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncp5l\" (UniqueName: \"kubernetes.io/projected/8b19fd77-6353-4456-afd3-00dc264d614e-kube-api-access-ncp5l\") pod \"redhat-marketplace-6rlkf\" (UID: \"8b19fd77-6353-4456-afd3-00dc264d614e\") " pod="openshift-marketplace/redhat-marketplace-6rlkf" Jan 28 15:03:50 crc kubenswrapper[4893]: E0128 15:03:50.185459 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:50.685437393 +0000 UTC m=+148.459052421 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.289291 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b19fd77-6353-4456-afd3-00dc264d614e-utilities\") pod \"redhat-marketplace-6rlkf\" (UID: \"8b19fd77-6353-4456-afd3-00dc264d614e\") " pod="openshift-marketplace/redhat-marketplace-6rlkf" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.289378 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b19fd77-6353-4456-afd3-00dc264d614e-catalog-content\") pod \"redhat-marketplace-6rlkf\" (UID: \"8b19fd77-6353-4456-afd3-00dc264d614e\") " pod="openshift-marketplace/redhat-marketplace-6rlkf" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.289419 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.289452 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncp5l\" (UniqueName: \"kubernetes.io/projected/8b19fd77-6353-4456-afd3-00dc264d614e-kube-api-access-ncp5l\") pod \"redhat-marketplace-6rlkf\" (UID: \"8b19fd77-6353-4456-afd3-00dc264d614e\") " pod="openshift-marketplace/redhat-marketplace-6rlkf" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.290271 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b19fd77-6353-4456-afd3-00dc264d614e-utilities\") pod \"redhat-marketplace-6rlkf\" (UID: \"8b19fd77-6353-4456-afd3-00dc264d614e\") " pod="openshift-marketplace/redhat-marketplace-6rlkf" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.290523 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b19fd77-6353-4456-afd3-00dc264d614e-catalog-content\") pod \"redhat-marketplace-6rlkf\" (UID: \"8b19fd77-6353-4456-afd3-00dc264d614e\") " pod="openshift-marketplace/redhat-marketplace-6rlkf" Jan 28 15:03:50 crc kubenswrapper[4893]: E0128 15:03:50.290960 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:50.790945824 +0000 UTC m=+148.564560842 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.334828 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncp5l\" (UniqueName: \"kubernetes.io/projected/8b19fd77-6353-4456-afd3-00dc264d614e-kube-api-access-ncp5l\") pod \"redhat-marketplace-6rlkf\" (UID: \"8b19fd77-6353-4456-afd3-00dc264d614e\") " pod="openshift-marketplace/redhat-marketplace-6rlkf"
Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.392562 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:03:50 crc kubenswrapper[4893]: E0128 15:03:50.393128 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:50.893110492 +0000 UTC m=+148.666725520 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.444123 4893 patch_prober.go:28] interesting pod/router-default-5444994796-xgk22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 15:03:50 crc kubenswrapper[4893]: [-]has-synced failed: reason withheld
Jan 28 15:03:50 crc kubenswrapper[4893]: [+]process-running ok
Jan 28 15:03:50 crc kubenswrapper[4893]: healthz check failed
Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.444205 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xgk22" podUID="912dd730-f999-4811-bf47-485755b7d949" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.470301 4893 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6rlkf" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.494204 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:50 crc kubenswrapper[4893]: E0128 15:03:50.494731 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:50.994707035 +0000 UTC m=+148.768322063 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.530339 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"39c7e6c1-520f-45b8-8d19-0d77b6853f7c","Type":"ContainerStarted","Data":"25e2578aa1bf358f0a2f768e29e7db77fb65192f27b4fdbf1bf4d5613f0b5e8c"} Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.532540 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46wz5" event={"ID":"ace4b0ad-d8d3-48aa-8635-6e6e96030672","Type":"ContainerStarted","Data":"3d2641441d869cd8fe41b19194f6959a91453747b9b29287265990911342caa0"} Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.533718 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5gtgr" event={"ID":"43843abc-ea99-476a-81c0-76d6530f7c75","Type":"ContainerStarted","Data":"833ad2f85b41d5b7ce33f205d41255823e4bb686a24e9fecbd675270ab27fc99"} Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.572867 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g675f"] Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.595409 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:50 crc kubenswrapper[4893]: E0128 15:03:50.595748 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:51.095727683 +0000 UTC m=+148.869342711 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.697592 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:50 crc kubenswrapper[4893]: E0128 15:03:50.698644 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:51.198622442 +0000 UTC m=+148.972237470 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:50 crc kubenswrapper[4893]: W0128 15:03:50.731329 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94f2541b_4f69_4bbc_9388_c040e53d85a0.slice/crio-5750af12fae08979bfaa99d6dd8251b234c145ee1445a7389126507fd1ae0aeb WatchSource:0}: Error finding container 5750af12fae08979bfaa99d6dd8251b234c145ee1445a7389126507fd1ae0aeb: Status 404 returned error can't find the container with id 5750af12fae08979bfaa99d6dd8251b234c145ee1445a7389126507fd1ae0aeb Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.744507 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nwlnm"] Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.745996 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nwlnm" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.752942 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.763653 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nwlnm"] Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.765819 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rlkf"] Jan 28 15:03:50 crc kubenswrapper[4893]: W0128 15:03:50.798311 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b19fd77_6353_4456_afd3_00dc264d614e.slice/crio-81a9a90ecdcaf30632377b42d24c9cbe92049ecf8a790f495f96fdcd034e7790 WatchSource:0}: Error finding container 81a9a90ecdcaf30632377b42d24c9cbe92049ecf8a790f495f96fdcd034e7790: Status 404 returned error can't find the container with id 81a9a90ecdcaf30632377b42d24c9cbe92049ecf8a790f495f96fdcd034e7790 Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.798324 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:50 crc kubenswrapper[4893]: E0128 15:03:50.798632 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:51.298609392 +0000 UTC m=+149.072224430 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.798750 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.798889 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2-catalog-content\") pod \"redhat-operators-nwlnm\" (UID: \"f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2\") " pod="openshift-marketplace/redhat-operators-nwlnm" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.798974 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2-utilities\") pod \"redhat-operators-nwlnm\" (UID: \"f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2\") " pod="openshift-marketplace/redhat-operators-nwlnm" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.799140 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:50 crc kubenswrapper[4893]: E0128 15:03:50.799200 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:51.299185278 +0000 UTC m=+149.072800306 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.799329 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p26nl\" (UniqueName: \"kubernetes.io/projected/f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2-kube-api-access-p26nl\") pod \"redhat-operators-nwlnm\" (UID: \"f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2\") " pod="openshift-marketplace/redhat-operators-nwlnm" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.799438 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.799546 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.799653 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.804927 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.805191 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.807098 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 
15:03:50.810807 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.813677 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.820922 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.834979 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.900640 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:50 crc kubenswrapper[4893]: E0128 15:03:50.900799 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:51.400777271 +0000 UTC m=+149.174392299 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.901820 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.901935 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2-catalog-content\") pod \"redhat-operators-nwlnm\" (UID: \"f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2\") " pod="openshift-marketplace/redhat-operators-nwlnm" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.902017 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2-utilities\") pod \"redhat-operators-nwlnm\" (UID: \"f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2\") " pod="openshift-marketplace/redhat-operators-nwlnm" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.902133 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p26nl\" (UniqueName: \"kubernetes.io/projected/f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2-kube-api-access-p26nl\") pod \"redhat-operators-nwlnm\" (UID: \"f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2\") " pod="openshift-marketplace/redhat-operators-nwlnm" Jan 28 15:03:50 crc kubenswrapper[4893]: E0128 15:03:50.902321 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:51.402294593 +0000 UTC m=+149.175909641 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.902442 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2-catalog-content\") pod \"redhat-operators-nwlnm\" (UID: \"f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2\") " pod="openshift-marketplace/redhat-operators-nwlnm" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.902538 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2-utilities\") pod \"redhat-operators-nwlnm\" (UID: \"f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2\") " pod="openshift-marketplace/redhat-operators-nwlnm" Jan 28 15:03:50 crc kubenswrapper[4893]: I0128 15:03:50.922603 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p26nl\" (UniqueName: \"kubernetes.io/projected/f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2-kube-api-access-p26nl\") pod \"redhat-operators-nwlnm\" (UID: \"f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2\") " pod="openshift-marketplace/redhat-operators-nwlnm" Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.003458 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:51 crc kubenswrapper[4893]: E0128 15:03:51.003837 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:51.503820694 +0000 UTC m=+149.277435722 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.078181 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nwlnm" Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.106827 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:51 crc kubenswrapper[4893]: E0128 15:03:51.107226 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:51.607209177 +0000 UTC m=+149.380824205 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.148236 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mtslh"] Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.149721 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mtslh" Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.163188 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mtslh"] Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.212246 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.212492 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll2rd\" (UniqueName: \"kubernetes.io/projected/0c2ed13a-5aec-42ad-80a0-1ee315e4fb12-kube-api-access-ll2rd\") pod \"redhat-operators-mtslh\" (UID: \"0c2ed13a-5aec-42ad-80a0-1ee315e4fb12\") " pod="openshift-marketplace/redhat-operators-mtslh" Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.212520 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c2ed13a-5aec-42ad-80a0-1ee315e4fb12-utilities\") pod \"redhat-operators-mtslh\" (UID: \"0c2ed13a-5aec-42ad-80a0-1ee315e4fb12\") " pod="openshift-marketplace/redhat-operators-mtslh" Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.212577 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c2ed13a-5aec-42ad-80a0-1ee315e4fb12-catalog-content\") pod \"redhat-operators-mtslh\" (UID: \"0c2ed13a-5aec-42ad-80a0-1ee315e4fb12\") " pod="openshift-marketplace/redhat-operators-mtslh" Jan 28 15:03:51 crc kubenswrapper[4893]: E0128 
15:03:51.212701 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:51.712684547 +0000 UTC m=+149.486299575 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.313238 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.313290 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c2ed13a-5aec-42ad-80a0-1ee315e4fb12-catalog-content\") pod \"redhat-operators-mtslh\" (UID: \"0c2ed13a-5aec-42ad-80a0-1ee315e4fb12\") " pod="openshift-marketplace/redhat-operators-mtslh" Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.313375 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll2rd\" (UniqueName: \"kubernetes.io/projected/0c2ed13a-5aec-42ad-80a0-1ee315e4fb12-kube-api-access-ll2rd\") pod \"redhat-operators-mtslh\" (UID: \"0c2ed13a-5aec-42ad-80a0-1ee315e4fb12\") " pod="openshift-marketplace/redhat-operators-mtslh" Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.313396 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c2ed13a-5aec-42ad-80a0-1ee315e4fb12-utilities\") pod \"redhat-operators-mtslh\" (UID: \"0c2ed13a-5aec-42ad-80a0-1ee315e4fb12\") " pod="openshift-marketplace/redhat-operators-mtslh" Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.314166 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c2ed13a-5aec-42ad-80a0-1ee315e4fb12-utilities\") pod \"redhat-operators-mtslh\" (UID: \"0c2ed13a-5aec-42ad-80a0-1ee315e4fb12\") " pod="openshift-marketplace/redhat-operators-mtslh" Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.314732 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c2ed13a-5aec-42ad-80a0-1ee315e4fb12-catalog-content\") pod \"redhat-operators-mtslh\" (UID: \"0c2ed13a-5aec-42ad-80a0-1ee315e4fb12\") " pod="openshift-marketplace/redhat-operators-mtslh" Jan 28 15:03:51 crc kubenswrapper[4893]: E0128 15:03:51.315020 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:51.81500554 +0000 UTC m=+149.588620568 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.336316 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll2rd\" (UniqueName: \"kubernetes.io/projected/0c2ed13a-5aec-42ad-80a0-1ee315e4fb12-kube-api-access-ll2rd\") pod \"redhat-operators-mtslh\" (UID: \"0c2ed13a-5aec-42ad-80a0-1ee315e4fb12\") " pod="openshift-marketplace/redhat-operators-mtslh" Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.415055 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:51 crc kubenswrapper[4893]: E0128 15:03:51.415568 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:51.915522194 +0000 UTC m=+149.689137212 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.441814 4893 patch_prober.go:28] interesting pod/router-default-5444994796-xgk22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:03:51 crc kubenswrapper[4893]: [-]has-synced failed: reason withheld Jan 28 15:03:51 crc kubenswrapper[4893]: [+]process-running ok Jan 28 15:03:51 crc kubenswrapper[4893]: healthz check failed Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.441886 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xgk22" podUID="912dd730-f999-4811-bf47-485755b7d949" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.468643 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nwlnm"] Jan 28 15:03:51 crc kubenswrapper[4893]: W0128 15:03:51.478183 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf69fe16f_cdc0_4aa4_aec1_2dd915eed2d2.slice/crio-a5ae9372da2036ec65fe74526c8ad1dceb2814422cb946d02303d9939b438f10 WatchSource:0}: Error finding container a5ae9372da2036ec65fe74526c8ad1dceb2814422cb946d02303d9939b438f10: Status 404 returned error can't find the 
container with id a5ae9372da2036ec65fe74526c8ad1dceb2814422cb946d02303d9939b438f10 Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.491278 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mtslh" Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.517607 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:51 crc kubenswrapper[4893]: E0128 15:03:51.518029 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:52.018016412 +0000 UTC m=+149.791631440 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.540515 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"90d4dd13f87bf7dc0fa2dce6edd9c156b73e7587a1001d3530f0ea71c56544ae"} Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.543200 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"39c7e6c1-520f-45b8-8d19-0d77b6853f7c","Type":"ContainerStarted","Data":"0d8e39b8edf8cda3ca7dcfd9c70a5feed17143197a1e4f81f301727536291b20"} Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.545359 4893 generic.go:334] "Generic (PLEG): container finished" podID="ace4b0ad-d8d3-48aa-8635-6e6e96030672" containerID="3d2641441d869cd8fe41b19194f6959a91453747b9b29287265990911342caa0" exitCode=0 Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.545486 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46wz5" event={"ID":"ace4b0ad-d8d3-48aa-8635-6e6e96030672","Type":"ContainerDied","Data":"3d2641441d869cd8fe41b19194f6959a91453747b9b29287265990911342caa0"} Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.547721 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77mgk" event={"ID":"f9efa33f-313e-484f-967c-1d829b6f8250","Type":"ContainerStarted","Data":"f4be2056952fd7894303c984968317ba819ffc48c1263fb5e8d78d024dcf4a79"} Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.549682 4893 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.550030 4893 generic.go:334] "Generic (PLEG): container finished" podID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" containerID="17e2f7a97d4ce620fadc3a513acd44774acbc6c71ae39715aec815803a69046d" exitCode=0 Jan 28 
15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.550088 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2wp6" event={"ID":"c1d61ecd-2c35-4e84-85db-9ebe350850a6","Type":"ContainerDied","Data":"17e2f7a97d4ce620fadc3a513acd44774acbc6c71ae39715aec815803a69046d"} Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.559642 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nwlnm" event={"ID":"f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2","Type":"ContainerStarted","Data":"a5ae9372da2036ec65fe74526c8ad1dceb2814422cb946d02303d9939b438f10"} Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.563329 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g675f" event={"ID":"94f2541b-4f69-4bbc-9388-c040e53d85a0","Type":"ContainerStarted","Data":"5750af12fae08979bfaa99d6dd8251b234c145ee1445a7389126507fd1ae0aeb"} Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.574439 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rlkf" event={"ID":"8b19fd77-6353-4456-afd3-00dc264d614e","Type":"ContainerStarted","Data":"81a9a90ecdcaf30632377b42d24c9cbe92049ecf8a790f495f96fdcd034e7790"} Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.578824 4893 generic.go:334] "Generic (PLEG): container finished" podID="43843abc-ea99-476a-81c0-76d6530f7c75" containerID="833ad2f85b41d5b7ce33f205d41255823e4bb686a24e9fecbd675270ab27fc99" exitCode=0 Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.578867 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5gtgr" event={"ID":"43843abc-ea99-476a-81c0-76d6530f7c75","Type":"ContainerDied","Data":"833ad2f85b41d5b7ce33f205d41255823e4bb686a24e9fecbd675270ab27fc99"} Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.622408 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:51 crc kubenswrapper[4893]: E0128 15:03:51.623487 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:52.12343913 +0000 UTC m=+149.897054158 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.726138 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:51 crc kubenswrapper[4893]: E0128 15:03:51.726717 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:52.226700149 +0000 UTC m=+150.000315177 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.826864 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:51 crc kubenswrapper[4893]: E0128 15:03:51.827621 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:52.327604824 +0000 UTC m=+150.101219852 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.929137 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:51 crc kubenswrapper[4893]: E0128 15:03:51.929629 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:03:52.429612079 +0000 UTC m=+150.203227107 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g2dcn" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:51 crc kubenswrapper[4893]: I0128 15:03:51.954412 4893 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.030002 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:52 crc kubenswrapper[4893]: E0128 15:03:52.030447 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:52.530416241 +0000 UTC m=+150.304031269 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.064550 4893 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-28T15:03:51.954453599Z","Handler":null,"Name":""} Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.072263 4893 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.072309 4893 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.133259 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.142673 4893 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.142723 4893 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.145334 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mtslh"] Jan 28 15:03:52 crc kubenswrapper[4893]: W0128 15:03:52.220751 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c2ed13a_5aec_42ad_80a0_1ee315e4fb12.slice/crio-fc599c836f27b10ed49b9c866cb202f62b879aafe766ee6004b6a7667c2425ed WatchSource:0}: Error finding container fc599c836f27b10ed49b9c866cb202f62b879aafe766ee6004b6a7667c2425ed: Status 404 returned error can't find the container with id fc599c836f27b10ed49b9c866cb202f62b879aafe766ee6004b6a7667c2425ed Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.336941 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g2dcn\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.403267 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.437089 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.439659 4893 patch_prober.go:28] interesting pod/router-default-5444994796-xgk22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:03:52 crc kubenswrapper[4893]: [-]has-synced failed: reason withheld Jan 28 15:03:52 crc kubenswrapper[4893]: [+]process-running ok Jan 28 15:03:52 crc kubenswrapper[4893]: healthz check failed Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.439711 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xgk22" podUID="912dd730-f999-4811-bf47-485755b7d949" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.443327 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.587641 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtslh" event={"ID":"0c2ed13a-5aec-42ad-80a0-1ee315e4fb12","Type":"ContainerStarted","Data":"fc599c836f27b10ed49b9c866cb202f62b879aafe766ee6004b6a7667c2425ed"} Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.590372 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n2xxg" event={"ID":"6bd4d6ea-438a-43d5-a137-f14a6c8d75f9","Type":"ContainerStarted","Data":"23c420346cba769c9d7cdaccd869d84697802fcd3883907e78f044993266c2b4"} Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.592069 4893 generic.go:334] "Generic (PLEG): container finished" podID="39c7e6c1-520f-45b8-8d19-0d77b6853f7c" containerID="0d8e39b8edf8cda3ca7dcfd9c70a5feed17143197a1e4f81f301727536291b20" exitCode=0 Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.592140 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"39c7e6c1-520f-45b8-8d19-0d77b6853f7c","Type":"ContainerDied","Data":"0d8e39b8edf8cda3ca7dcfd9c70a5feed17143197a1e4f81f301727536291b20"} Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.594048 4893 generic.go:334] "Generic (PLEG): container finished" podID="94f2541b-4f69-4bbc-9388-c040e53d85a0" containerID="828b24fe12f1c2287840e215bb42155bae588e19c90ecff5f57058c760f49c6d" exitCode=0 Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.594137 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g675f" 
event={"ID":"94f2541b-4f69-4bbc-9388-c040e53d85a0","Type":"ContainerDied","Data":"828b24fe12f1c2287840e215bb42155bae588e19c90ecff5f57058c760f49c6d"} Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.601359 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"2bd2bf6522433e06b9d8bb50ec19c2060ba9a3caa2220f4328f498eef113fbcc"} Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.601413 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"befa48368971c7b0f6da1c7791702c225550ee294c6a3d6491adc702b21b294d"} Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.601770 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-g2dcn"] Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.604744 4893 generic.go:334] "Generic (PLEG): container finished" podID="8b19fd77-6353-4456-afd3-00dc264d614e" containerID="75e7d2f7c43a7f281adb0f68b262d0f34ab6a1ba5c50e7566c5d889a0b9abe95" exitCode=0 Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.604793 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rlkf" event={"ID":"8b19fd77-6353-4456-afd3-00dc264d614e","Type":"ContainerDied","Data":"75e7d2f7c43a7f281adb0f68b262d0f34ab6a1ba5c50e7566c5d889a0b9abe95"} Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.606354 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a262a0f945ce2e233630773f6f8a147312c2737f3dd282234213fffde529d35a"} Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.606386 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f9ff0427a715069512433e18bcdbb08fb7dcdd147a5f910ca04ee52640200237"} Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.607629 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"e41972d65b3e3995436f4b0ced11f6dd997457ca4e5d95a0f1d390f548e53438"} Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.612519 4893 generic.go:334] "Generic (PLEG): container finished" podID="f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" containerID="634c36920bba0bb3ce14092e60fa8095188f989249b66dc959105714681bc709" exitCode=0 Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.612590 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nwlnm" event={"ID":"f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2","Type":"ContainerDied","Data":"634c36920bba0bb3ce14092e60fa8095188f989249b66dc959105714681bc709"} Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.617280 4893 generic.go:334] "Generic (PLEG): container finished" podID="f9efa33f-313e-484f-967c-1d829b6f8250" containerID="f4be2056952fd7894303c984968317ba819ffc48c1263fb5e8d78d024dcf4a79" exitCode=0 Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.617328 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-77mgk" event={"ID":"f9efa33f-313e-484f-967c-1d829b6f8250","Type":"ContainerDied","Data":"f4be2056952fd7894303c984968317ba819ffc48c1263fb5e8d78d024dcf4a79"} Jan 28 15:03:52 crc kubenswrapper[4893]: W0128 15:03:52.623194 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode54303a1_baec_46eb_92e9_9beeca76bb98.slice/crio-63301dc75a213ff6c57a16ef100e7fd6aee789410a7d7a0d560a9b49f5e6b372 WatchSource:0}: Error finding container 63301dc75a213ff6c57a16ef100e7fd6aee789410a7d7a0d560a9b49f5e6b372: Status 404 returned error can't find the container with id 63301dc75a213ff6c57a16ef100e7fd6aee789410a7d7a0d560a9b49f5e6b372 Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.904129 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.970572 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" Jan 28 15:03:52 crc kubenswrapper[4893]: I0128 15:03:52.979991 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-vd8ml" Jan 28 15:03:53 crc kubenswrapper[4893]: I0128 15:03:53.439353 4893 patch_prober.go:28] interesting pod/router-default-5444994796-xgk22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:03:53 crc kubenswrapper[4893]: [-]has-synced failed: reason withheld Jan 28 15:03:53 crc kubenswrapper[4893]: [+]process-running ok Jan 28 15:03:53 crc kubenswrapper[4893]: healthz check failed Jan 28 15:03:53 crc kubenswrapper[4893]: I0128 15:03:53.439410 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xgk22" podUID="912dd730-f999-4811-bf47-485755b7d949" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:03:53 crc kubenswrapper[4893]: I0128 15:03:53.624160 4893 generic.go:334] "Generic (PLEG): container finished" podID="0c2ed13a-5aec-42ad-80a0-1ee315e4fb12" containerID="479569e987536e89a1faf6d1fb3540b92e412f8dbd1efab3f6777b205c48f9ad" exitCode=0 Jan 28 15:03:53 crc kubenswrapper[4893]: I0128 15:03:53.624275 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtslh" event={"ID":"0c2ed13a-5aec-42ad-80a0-1ee315e4fb12","Type":"ContainerDied","Data":"479569e987536e89a1faf6d1fb3540b92e412f8dbd1efab3f6777b205c48f9ad"} Jan 28 15:03:53 crc kubenswrapper[4893]: I0128 15:03:53.639418 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n2xxg" event={"ID":"6bd4d6ea-438a-43d5-a137-f14a6c8d75f9","Type":"ContainerStarted","Data":"1f58ca4446e9436e65f71c809581669c1471c5a55eaa22efc80946c01c066193"} Jan 28 15:03:53 crc kubenswrapper[4893]: I0128 15:03:53.645195 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" event={"ID":"e54303a1-baec-46eb-92e9-9beeca76bb98","Type":"ContainerStarted","Data":"3f9ec4a163c1a671ea2ebe5d58822a1bde9d10f2810236b5a3046528d5aac46b"} Jan 28 15:03:53 crc kubenswrapper[4893]: I0128 15:03:53.645250 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" event={"ID":"e54303a1-baec-46eb-92e9-9beeca76bb98","Type":"ContainerStarted","Data":"63301dc75a213ff6c57a16ef100e7fd6aee789410a7d7a0d560a9b49f5e6b372"} Jan 28 15:03:53 crc kubenswrapper[4893]: I0128 15:03:53.645300 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:53 crc kubenswrapper[4893]: I0128 15:03:53.646776 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:03:53 crc kubenswrapper[4893]: I0128 15:03:53.694793 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-n2xxg" podStartSLOduration=18.694770169999998 podStartE2EDuration="18.69477017s" podCreationTimestamp="2026-01-28 15:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:53.691980653 +0000 UTC m=+151.465595681" watchObservedRunningTime="2026-01-28 15:03:53.69477017 +0000 UTC m=+151.468385198" Jan 28 15:03:53 crc kubenswrapper[4893]: I0128 15:03:53.732489 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" podStartSLOduration=131.732455532 podStartE2EDuration="2m11.732455532s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:03:53.728593796 +0000 UTC m=+151.502208844" watchObservedRunningTime="2026-01-28 15:03:53.732455532 +0000 UTC m=+151.506070560" Jan 28 15:03:53 crc kubenswrapper[4893]: I0128 15:03:53.803115 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-d5fwk" Jan 28 15:03:53 crc kubenswrapper[4893]: I0128 15:03:53.817763 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 15:03:53 crc kubenswrapper[4893]: I0128 15:03:53.820289 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 28 15:03:53 crc kubenswrapper[4893]: I0128 15:03:53.824640 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 28 15:03:53 crc kubenswrapper[4893]: I0128 15:03:53.824868 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 28 15:03:53 crc kubenswrapper[4893]: I0128 15:03:53.833359 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 28 15:03:53 crc kubenswrapper[4893]: I0128 15:03:53.882347 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b8dc0abf-caa2-448b-b851-3bb3985b5c58-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b8dc0abf-caa2-448b-b851-3bb3985b5c58\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 28 15:03:53 crc kubenswrapper[4893]: I0128 15:03:53.882446 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8dc0abf-caa2-448b-b851-3bb3985b5c58-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b8dc0abf-caa2-448b-b851-3bb3985b5c58\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 28 15:03:53 crc kubenswrapper[4893]: I0128 15:03:53.905058 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 28 15:03:53 crc kubenswrapper[4893]: I0128 15:03:53.983531 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/39c7e6c1-520f-45b8-8d19-0d77b6853f7c-kube-api-access\") pod \"39c7e6c1-520f-45b8-8d19-0d77b6853f7c\" (UID: \"39c7e6c1-520f-45b8-8d19-0d77b6853f7c\") "
Jan 28 15:03:53 crc kubenswrapper[4893]: I0128 15:03:53.983635 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/39c7e6c1-520f-45b8-8d19-0d77b6853f7c-kubelet-dir\") pod \"39c7e6c1-520f-45b8-8d19-0d77b6853f7c\" (UID: \"39c7e6c1-520f-45b8-8d19-0d77b6853f7c\") "
Jan 28 15:03:53 crc kubenswrapper[4893]: I0128 15:03:53.983878 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39c7e6c1-520f-45b8-8d19-0d77b6853f7c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "39c7e6c1-520f-45b8-8d19-0d77b6853f7c" (UID: "39c7e6c1-520f-45b8-8d19-0d77b6853f7c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:03:53 crc kubenswrapper[4893]: I0128 15:03:53.983905 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8dc0abf-caa2-448b-b851-3bb3985b5c58-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b8dc0abf-caa2-448b-b851-3bb3985b5c58\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 28 15:03:53 crc kubenswrapper[4893]: I0128 15:03:53.984313 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b8dc0abf-caa2-448b-b851-3bb3985b5c58-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b8dc0abf-caa2-448b-b851-3bb3985b5c58\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 28 15:03:53 crc kubenswrapper[4893]: I0128 15:03:53.984386 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b8dc0abf-caa2-448b-b851-3bb3985b5c58-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b8dc0abf-caa2-448b-b851-3bb3985b5c58\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 28 15:03:53 crc kubenswrapper[4893]: I0128 15:03:53.984457 4893 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/39c7e6c1-520f-45b8-8d19-0d77b6853f7c-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 28 15:03:54 crc kubenswrapper[4893]: I0128 15:03:54.001321 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39c7e6c1-520f-45b8-8d19-0d77b6853f7c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "39c7e6c1-520f-45b8-8d19-0d77b6853f7c" (UID: "39c7e6c1-520f-45b8-8d19-0d77b6853f7c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:03:54 crc kubenswrapper[4893]: I0128 15:03:54.002643 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8dc0abf-caa2-448b-b851-3bb3985b5c58-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b8dc0abf-caa2-448b-b851-3bb3985b5c58\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 28 15:03:54 crc kubenswrapper[4893]: I0128 15:03:54.085383 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/39c7e6c1-520f-45b8-8d19-0d77b6853f7c-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 28 15:03:54 crc kubenswrapper[4893]: I0128 15:03:54.202209 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 28 15:03:54 crc kubenswrapper[4893]: I0128 15:03:54.441140 4893 patch_prober.go:28] interesting pod/router-default-5444994796-xgk22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 15:03:54 crc kubenswrapper[4893]: [-]has-synced failed: reason withheld
Jan 28 15:03:54 crc kubenswrapper[4893]: [+]process-running ok
Jan 28 15:03:54 crc kubenswrapper[4893]: healthz check failed
Jan 28 15:03:54 crc kubenswrapper[4893]: I0128 15:03:54.441569 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xgk22" podUID="912dd730-f999-4811-bf47-485755b7d949" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 15:03:54 crc kubenswrapper[4893]: I0128 15:03:54.497104 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 28 15:03:54 crc kubenswrapper[4893]: W0128 15:03:54.509668 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb8dc0abf_caa2_448b_b851_3bb3985b5c58.slice/crio-1e91fa403707bb8c65788c7f306010e89308fae916922de8e09f93005556308b WatchSource:0}: Error finding container 1e91fa403707bb8c65788c7f306010e89308fae916922de8e09f93005556308b: Status 404 returned error can't find the container with id 1e91fa403707bb8c65788c7f306010e89308fae916922de8e09f93005556308b
Jan 28 15:03:54 crc kubenswrapper[4893]: I0128 15:03:54.661994 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b8dc0abf-caa2-448b-b851-3bb3985b5c58","Type":"ContainerStarted","Data":"1e91fa403707bb8c65788c7f306010e89308fae916922de8e09f93005556308b"}
Jan 28 15:03:54 crc kubenswrapper[4893]: I0128 15:03:54.672148 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"39c7e6c1-520f-45b8-8d19-0d77b6853f7c","Type":"ContainerDied","Data":"25e2578aa1bf358f0a2f768e29e7db77fb65192f27b4fdbf1bf4d5613f0b5e8c"}
Jan 28 15:03:54 crc kubenswrapper[4893]: I0128 15:03:54.672225 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25e2578aa1bf358f0a2f768e29e7db77fb65192f27b4fdbf1bf4d5613f0b5e8c"
Jan 28 15:03:54 crc kubenswrapper[4893]: I0128 15:03:54.672330 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 28 15:03:55 crc kubenswrapper[4893]: I0128 15:03:55.439199 4893 patch_prober.go:28] interesting pod/router-default-5444994796-xgk22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 15:03:55 crc kubenswrapper[4893]: [-]has-synced failed: reason withheld
Jan 28 15:03:55 crc kubenswrapper[4893]: [+]process-running ok
Jan 28 15:03:55 crc kubenswrapper[4893]: healthz check failed
Jan 28 15:03:55 crc kubenswrapper[4893]: I0128 15:03:55.439533 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xgk22" podUID="912dd730-f999-4811-bf47-485755b7d949" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 15:03:56 crc kubenswrapper[4893]: I0128 15:03:56.439418 4893 patch_prober.go:28] interesting pod/router-default-5444994796-xgk22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 15:03:56 crc kubenswrapper[4893]: [-]has-synced failed: reason withheld
Jan 28 15:03:56 crc kubenswrapper[4893]: [+]process-running ok
Jan 28 15:03:56 crc kubenswrapper[4893]: healthz check failed
Jan 28 15:03:56 crc kubenswrapper[4893]: I0128 15:03:56.439509 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xgk22" podUID="912dd730-f999-4811-bf47-485755b7d949" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 15:03:56 crc kubenswrapper[4893]: I0128 15:03:56.692462 4893 generic.go:334] "Generic (PLEG): container finished" podID="b8dc0abf-caa2-448b-b851-3bb3985b5c58" containerID="9c1561671302dc0bb69ee8e04ca0345d4b0cfa27e2e9f3f3e4e42e50d837caf4" exitCode=0
Jan 28 15:03:56 crc kubenswrapper[4893]: I0128 15:03:56.692585 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b8dc0abf-caa2-448b-b851-3bb3985b5c58","Type":"ContainerDied","Data":"9c1561671302dc0bb69ee8e04ca0345d4b0cfa27e2e9f3f3e4e42e50d837caf4"}
Jan 28 15:03:57 crc kubenswrapper[4893]: I0128 15:03:57.446670 4893 patch_prober.go:28] interesting pod/router-default-5444994796-xgk22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 15:03:57 crc kubenswrapper[4893]: [-]has-synced failed: reason withheld
Jan 28 15:03:57 crc kubenswrapper[4893]: [+]process-running ok
Jan 28 15:03:57 crc kubenswrapper[4893]: healthz check failed
Jan 28 15:03:57 crc kubenswrapper[4893]: I0128 15:03:57.447127 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xgk22" podUID="912dd730-f999-4811-bf47-485755b7d949" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 15:03:57 crc kubenswrapper[4893]: I0128 15:03:57.588301 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gjc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Jan 28 15:03:57 crc kubenswrapper[4893]: I0128 15:03:57.588392 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gjc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Jan 28 15:03:57 crc kubenswrapper[4893]: I0128 15:03:57.588422 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z2gjc" podUID="e1399bb5-4202-4d0e-aac3-83bec9d52d2d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Jan 28 15:03:57 crc kubenswrapper[4893]: I0128 15:03:57.588454 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-z2gjc" podUID="e1399bb5-4202-4d0e-aac3-83bec9d52d2d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Jan 28 15:03:57 crc kubenswrapper[4893]: I0128 15:03:57.706096 4893 patch_prober.go:28] interesting pod/console-f9d7485db-vzxzx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Jan 28 15:03:57 crc kubenswrapper[4893]: I0128 15:03:57.706248 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-vzxzx" podUID="7d249efd-e40b-430f-98ec-9ad9c4e5cf70" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused"
Jan 28 15:03:58 crc kubenswrapper[4893]: I0128 15:03:58.439005 4893 patch_prober.go:28] interesting pod/router-default-5444994796-xgk22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 15:03:58 crc kubenswrapper[4893]: [-]has-synced failed: reason withheld
Jan 28 15:03:58 crc kubenswrapper[4893]: [+]process-running ok
Jan 28 15:03:58 crc kubenswrapper[4893]: healthz check failed
Jan 28 15:03:58 crc kubenswrapper[4893]: I0128 15:03:58.439851 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xgk22" podUID="912dd730-f999-4811-bf47-485755b7d949" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 15:03:59 crc kubenswrapper[4893]: I0128 15:03:59.439132 4893 patch_prober.go:28] interesting pod/router-default-5444994796-xgk22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 15:03:59 crc kubenswrapper[4893]: [-]has-synced failed: reason withheld
Jan 28 15:03:59 crc kubenswrapper[4893]: [+]process-running ok
Jan 28 15:03:59 crc kubenswrapper[4893]: healthz check failed
Jan 28 15:03:59 crc kubenswrapper[4893]: I0128 15:03:59.439213 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xgk22" podUID="912dd730-f999-4811-bf47-485755b7d949" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 15:04:00 crc kubenswrapper[4893]: I0128 15:04:00.438501 4893 patch_prober.go:28] interesting pod/router-default-5444994796-xgk22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 15:04:00 crc kubenswrapper[4893]: [-]has-synced failed: reason withheld
Jan 28 15:04:00 crc kubenswrapper[4893]: [+]process-running ok
Jan 28 15:04:00 crc kubenswrapper[4893]: healthz check failed
Jan 28 15:04:00 crc kubenswrapper[4893]: I0128 15:04:00.438582 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xgk22" podUID="912dd730-f999-4811-bf47-485755b7d949" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 15:04:01 crc kubenswrapper[4893]: I0128 15:04:01.443949 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-xgk22"
Jan 28 15:04:01 crc kubenswrapper[4893]: I0128 15:04:01.453703 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-xgk22"
Jan 28 15:04:03 crc kubenswrapper[4893]: I0128 15:04:03.044309 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 28 15:04:03 crc kubenswrapper[4893]: I0128 15:04:03.172006 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8dc0abf-caa2-448b-b851-3bb3985b5c58-kube-api-access\") pod \"b8dc0abf-caa2-448b-b851-3bb3985b5c58\" (UID: \"b8dc0abf-caa2-448b-b851-3bb3985b5c58\") "
Jan 28 15:04:03 crc kubenswrapper[4893]: I0128 15:04:03.172096 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b8dc0abf-caa2-448b-b851-3bb3985b5c58-kubelet-dir\") pod \"b8dc0abf-caa2-448b-b851-3bb3985b5c58\" (UID: \"b8dc0abf-caa2-448b-b851-3bb3985b5c58\") "
Jan 28 15:04:03 crc kubenswrapper[4893]: I0128 15:04:03.172352 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8dc0abf-caa2-448b-b851-3bb3985b5c58-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b8dc0abf-caa2-448b-b851-3bb3985b5c58" (UID: "b8dc0abf-caa2-448b-b851-3bb3985b5c58"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:04:03 crc kubenswrapper[4893]: I0128 15:04:03.172758 4893 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b8dc0abf-caa2-448b-b851-3bb3985b5c58-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 28 15:04:03 crc kubenswrapper[4893]: I0128 15:04:03.180696 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8dc0abf-caa2-448b-b851-3bb3985b5c58-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b8dc0abf-caa2-448b-b851-3bb3985b5c58" (UID: "b8dc0abf-caa2-448b-b851-3bb3985b5c58"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:04:03 crc kubenswrapper[4893]: I0128 15:04:03.274801 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8dc0abf-caa2-448b-b851-3bb3985b5c58-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 28 15:04:03 crc kubenswrapper[4893]: I0128 15:04:03.787374 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b8dc0abf-caa2-448b-b851-3bb3985b5c58","Type":"ContainerDied","Data":"1e91fa403707bb8c65788c7f306010e89308fae916922de8e09f93005556308b"}
Jan 28 15:04:03 crc kubenswrapper[4893]: I0128 15:04:03.787424 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e91fa403707bb8c65788c7f306010e89308fae916922de8e09f93005556308b"
Jan 28 15:04:03 crc kubenswrapper[4893]: I0128 15:04:03.787462 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 28 15:04:04 crc kubenswrapper[4893]: I0128 15:04:04.695344 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/27c2667f-3b81-4103-b924-fd2ec1678757-metrics-certs\") pod \"network-metrics-daemon-dqjfn\" (UID: \"27c2667f-3b81-4103-b924-fd2ec1678757\") " pod="openshift-multus/network-metrics-daemon-dqjfn"
Jan 28 15:04:04 crc kubenswrapper[4893]: I0128 15:04:04.702786 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/27c2667f-3b81-4103-b924-fd2ec1678757-metrics-certs\") pod \"network-metrics-daemon-dqjfn\" (UID: \"27c2667f-3b81-4103-b924-fd2ec1678757\") " pod="openshift-multus/network-metrics-daemon-dqjfn"
Jan 28 15:04:04 crc kubenswrapper[4893]: I0128 15:04:04.929048 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dqjfn"
Jan 28 15:04:05 crc kubenswrapper[4893]: I0128 15:04:05.722195 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 15:04:05 crc kubenswrapper[4893]: I0128 15:04:05.722270 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 15:04:06 crc kubenswrapper[4893]: I0128 15:04:06.715122 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6q42k"]
Jan 28 15:04:06 crc kubenswrapper[4893]: I0128 15:04:06.715490 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k" podUID="feaf053e-d992-479b-b7ac-f7383e0b4b35" containerName="controller-manager" containerID="cri-o://0019da72b398ba32c789693724241126a01f7604c9416f74b5fae6af133b4fc2" gracePeriod=30
Jan 28 15:04:06 crc kubenswrapper[4893]: I0128 15:04:06.734494 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn"]
Jan 28 15:04:06 crc kubenswrapper[4893]: I0128 15:04:06.734791 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn" podUID="b5a371e6-d5dc-4971-8abf-c193da52013c" containerName="route-controller-manager" containerID="cri-o://454eaf7ce10338a288bfc49269a2bc9cdea243ac7f60fcf90ed62ab8627e447e" gracePeriod=30
Jan 28 15:04:07 crc kubenswrapper[4893]: I0128 15:04:07.588425 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gjc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Jan 28 15:04:07 crc kubenswrapper[4893]: I0128 15:04:07.588516 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gjc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Jan 28 15:04:07 crc kubenswrapper[4893]: I0128 15:04:07.588795 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-z2gjc" podUID="e1399bb5-4202-4d0e-aac3-83bec9d52d2d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Jan 28 15:04:07 crc kubenswrapper[4893]: I0128 15:04:07.588856 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-z2gjc"
Jan 28 15:04:07 crc kubenswrapper[4893]: I0128 15:04:07.588848 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z2gjc" podUID="e1399bb5-4202-4d0e-aac3-83bec9d52d2d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Jan 28 15:04:07 crc kubenswrapper[4893]: I0128 15:04:07.590415 4893 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"33fa83199cc7897ad783a9d841e46883097d7f0938abaa9620456048273a708a"} pod="openshift-console/downloads-7954f5f757-z2gjc" containerMessage="Container download-server failed liveness probe, will be restarted"
Jan 28 15:04:07 crc kubenswrapper[4893]: I0128 15:04:07.590628 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-z2gjc" podUID="e1399bb5-4202-4d0e-aac3-83bec9d52d2d" containerName="download-server" containerID="cri-o://33fa83199cc7897ad783a9d841e46883097d7f0938abaa9620456048273a708a" gracePeriod=2
Jan 28 15:04:07 crc kubenswrapper[4893]: I0128 15:04:07.590683 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gjc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Jan 28 15:04:07 crc kubenswrapper[4893]: I0128 15:04:07.590739 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z2gjc" podUID="e1399bb5-4202-4d0e-aac3-83bec9d52d2d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Jan 28 15:04:07 crc kubenswrapper[4893]: I0128 15:04:07.709663 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-vzxzx"
Jan 28 15:04:07 crc kubenswrapper[4893]: I0128 15:04:07.714081 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-vzxzx"
Jan 28 15:04:07 crc kubenswrapper[4893]: I0128 15:04:07.836078 4893 generic.go:334] "Generic (PLEG): container finished" podID="b5a371e6-d5dc-4971-8abf-c193da52013c" containerID="454eaf7ce10338a288bfc49269a2bc9cdea243ac7f60fcf90ed62ab8627e447e" exitCode=0
Jan 28 15:04:07 crc kubenswrapper[4893]: I0128 15:04:07.836214 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn" event={"ID":"b5a371e6-d5dc-4971-8abf-c193da52013c","Type":"ContainerDied","Data":"454eaf7ce10338a288bfc49269a2bc9cdea243ac7f60fcf90ed62ab8627e447e"}
Jan 28 15:04:07 crc kubenswrapper[4893]: I0128 15:04:07.840860 4893 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-6q42k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Jan 28 15:04:07 crc kubenswrapper[4893]: I0128 15:04:07.840944 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k" podUID="feaf053e-d992-479b-b7ac-f7383e0b4b35" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Jan 28 15:04:07 crc kubenswrapper[4893]: I0128 15:04:07.843438 4893 generic.go:334] "Generic (PLEG): container finished" podID="feaf053e-d992-479b-b7ac-f7383e0b4b35" containerID="0019da72b398ba32c789693724241126a01f7604c9416f74b5fae6af133b4fc2" exitCode=0
Jan 28 15:04:07 crc kubenswrapper[4893]: I0128 15:04:07.844213 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k" event={"ID":"feaf053e-d992-479b-b7ac-f7383e0b4b35","Type":"ContainerDied","Data":"0019da72b398ba32c789693724241126a01f7604c9416f74b5fae6af133b4fc2"}
Jan 28 15:04:08 crc kubenswrapper[4893]: I0128 15:04:08.099842 4893 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-nj5sn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Jan 28 15:04:08 crc kubenswrapper[4893]: I0128 15:04:08.099909 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn" podUID="b5a371e6-d5dc-4971-8abf-c193da52013c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Jan 28 15:04:11 crc kubenswrapper[4893]: I0128 15:04:11.871786 4893 generic.go:334] "Generic (PLEG): container finished" podID="e1399bb5-4202-4d0e-aac3-83bec9d52d2d" containerID="33fa83199cc7897ad783a9d841e46883097d7f0938abaa9620456048273a708a" exitCode=0
Jan 28 15:04:11 crc kubenswrapper[4893]: I0128 15:04:11.871856 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-z2gjc" event={"ID":"e1399bb5-4202-4d0e-aac3-83bec9d52d2d","Type":"ContainerDied","Data":"33fa83199cc7897ad783a9d841e46883097d7f0938abaa9620456048273a708a"}
Jan 28 15:04:12 crc kubenswrapper[4893]: I0128 15:04:12.412903 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn"
Jan 28 15:04:17 crc kubenswrapper[4893]: I0128 15:04:17.589685 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gjc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Jan 28 15:04:17 crc kubenswrapper[4893]: I0128 15:04:17.589781 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z2gjc" podUID="e1399bb5-4202-4d0e-aac3-83bec9d52d2d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Jan 28 15:04:17 crc kubenswrapper[4893]: I0128 15:04:17.840818 4893 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-6q42k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Jan 28 15:04:17 crc kubenswrapper[4893]: I0128 15:04:17.840886 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k" podUID="feaf053e-d992-479b-b7ac-f7383e0b4b35" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Jan 28 15:04:18 crc kubenswrapper[4893]: I0128 15:04:18.100166 4893 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-nj5sn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Jan 28 15:04:18 crc kubenswrapper[4893]: I0128 15:04:18.100237 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn" podUID="b5a371e6-d5dc-4971-8abf-c193da52013c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Jan 28 15:04:18 crc kubenswrapper[4893]: I0128 15:04:18.421390 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dz8b4"
Jan 28 15:04:27 crc kubenswrapper[4893]: I0128 15:04:27.588375 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gjc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Jan 28 15:04:27 crc kubenswrapper[4893]: I0128 15:04:27.588832 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z2gjc" podUID="e1399bb5-4202-4d0e-aac3-83bec9d52d2d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Jan 28 15:04:28 crc kubenswrapper[4893]: I0128 15:04:28.099126 4893 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-nj5sn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Jan 28 15:04:28 crc kubenswrapper[4893]: I0128 15:04:28.099183 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn" podUID="b5a371e6-d5dc-4971-8abf-c193da52013c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Jan 28 15:04:28 crc kubenswrapper[4893]: I0128 15:04:28.840895 4893 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-6q42k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 15:04:28 crc kubenswrapper[4893]: I0128 15:04:28.841292 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k" podUID="feaf053e-d992-479b-b7ac-f7383e0b4b35" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 15:04:29 crc kubenswrapper[4893]: I0128 15:04:29.813877 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 28 15:04:29 crc kubenswrapper[4893]: E0128 15:04:29.814365 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c7e6c1-520f-45b8-8d19-0d77b6853f7c" containerName="pruner"
Jan 28 15:04:29 crc kubenswrapper[4893]: I0128 15:04:29.814487 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c7e6c1-520f-45b8-8d19-0d77b6853f7c" containerName="pruner"
Jan 28 15:04:29 crc kubenswrapper[4893]: E0128 15:04:29.814556 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8dc0abf-caa2-448b-b851-3bb3985b5c58" containerName="pruner"
Jan 28 15:04:29 crc kubenswrapper[4893]: I0128 15:04:29.814626 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8dc0abf-caa2-448b-b851-3bb3985b5c58" containerName="pruner"
Jan 28 15:04:29 crc kubenswrapper[4893]: I0128 15:04:29.814784 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c7e6c1-520f-45b8-8d19-0d77b6853f7c" containerName="pruner"
Jan 28 15:04:29 crc kubenswrapper[4893]: I0128 15:04:29.814858 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8dc0abf-caa2-448b-b851-3bb3985b5c58" containerName="pruner"
Jan 28 15:04:29 crc kubenswrapper[4893]: I0128 15:04:29.815282 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 28 15:04:29 crc kubenswrapper[4893]: I0128 15:04:29.817387 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 28 15:04:29 crc kubenswrapper[4893]: I0128 15:04:29.817629 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 28 15:04:29 crc kubenswrapper[4893]: I0128 15:04:29.822700 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 28 15:04:29 crc kubenswrapper[4893]: I0128 15:04:29.979513 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8921c1c-eff5-4b34-a390-ea1bcfacb84c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c8921c1c-eff5-4b34-a390-ea1bcfacb84c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 28 15:04:29 crc kubenswrapper[4893]: I0128 15:04:29.980302 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8921c1c-eff5-4b34-a390-ea1bcfacb84c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c8921c1c-eff5-4b34-a390-ea1bcfacb84c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 28 15:04:30 crc kubenswrapper[4893]: I0128 15:04:30.096427 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8921c1c-eff5-4b34-a390-ea1bcfacb84c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c8921c1c-eff5-4b34-a390-ea1bcfacb84c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 28 15:04:30 crc kubenswrapper[4893]: I0128 15:04:30.096736 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8921c1c-eff5-4b34-a390-ea1bcfacb84c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c8921c1c-eff5-4b34-a390-ea1bcfacb84c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 28 15:04:30 crc kubenswrapper[4893]: I0128 15:04:30.096965 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8921c1c-eff5-4b34-a390-ea1bcfacb84c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c8921c1c-eff5-4b34-a390-ea1bcfacb84c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 28 15:04:30 crc kubenswrapper[4893]: I0128 15:04:30.120170 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8921c1c-eff5-4b34-a390-ea1bcfacb84c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c8921c1c-eff5-4b34-a390-ea1bcfacb84c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 28 15:04:30 crc kubenswrapper[4893]: I0128 15:04:30.133718 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 28 15:04:31 crc kubenswrapper[4893]: I0128 15:04:31.337770 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:04:35 crc kubenswrapper[4893]: I0128 15:04:35.025032 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 28 15:04:35 crc kubenswrapper[4893]: I0128 15:04:35.026333 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 28 15:04:35 crc kubenswrapper[4893]: I0128 15:04:35.040314 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 28 15:04:35 crc kubenswrapper[4893]: I0128 15:04:35.171862 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6f68e997-8efc-4d18-bc36-8c55c1c80630-var-lock\") pod \"installer-9-crc\" (UID: \"6f68e997-8efc-4d18-bc36-8c55c1c80630\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 28 15:04:35 crc kubenswrapper[4893]: I0128 15:04:35.171955 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f68e997-8efc-4d18-bc36-8c55c1c80630-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6f68e997-8efc-4d18-bc36-8c55c1c80630\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 28 15:04:35 crc kubenswrapper[4893]: I0128 15:04:35.172063 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f68e997-8efc-4d18-bc36-8c55c1c80630-kube-api-access\") pod \"installer-9-crc\" (UID: \"6f68e997-8efc-4d18-bc36-8c55c1c80630\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 28 15:04:35 crc kubenswrapper[4893]: I0128 15:04:35.273812 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f68e997-8efc-4d18-bc36-8c55c1c80630-kube-api-access\") pod \"installer-9-crc\" (UID: \"6f68e997-8efc-4d18-bc36-8c55c1c80630\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 28 15:04:35 crc kubenswrapper[4893]: I0128 15:04:35.273917 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6f68e997-8efc-4d18-bc36-8c55c1c80630-var-lock\") pod \"installer-9-crc\" (UID: \"6f68e997-8efc-4d18-bc36-8c55c1c80630\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 28 15:04:35 crc kubenswrapper[4893]: I0128 15:04:35.274006 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f68e997-8efc-4d18-bc36-8c55c1c80630-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6f68e997-8efc-4d18-bc36-8c55c1c80630\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 28 15:04:35 crc kubenswrapper[4893]: I0128 15:04:35.274091 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6f68e997-8efc-4d18-bc36-8c55c1c80630-var-lock\") pod \"installer-9-crc\" (UID: \"6f68e997-8efc-4d18-bc36-8c55c1c80630\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 28 15:04:35 crc kubenswrapper[4893]: I0128 15:04:35.274263 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f68e997-8efc-4d18-bc36-8c55c1c80630-kubelet-dir\") pod \"installer-9-crc\" (UID: \"6f68e997-8efc-4d18-bc36-8c55c1c80630\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 28 15:04:35 crc kubenswrapper[4893]: I0128 15:04:35.294230 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f68e997-8efc-4d18-bc36-8c55c1c80630-kube-api-access\") pod \"installer-9-crc\" (UID: \"6f68e997-8efc-4d18-bc36-8c55c1c80630\") " pod="openshift-kube-apiserver/installer-9-crc"
Jan 28 15:04:35 crc kubenswrapper[4893]: I0128 15:04:35.351937 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 28 15:04:35 crc kubenswrapper[4893]: I0128 15:04:35.723083 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 15:04:35 crc kubenswrapper[4893]: I0128 15:04:35.723220 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 15:04:37 crc kubenswrapper[4893]: I0128 15:04:37.590191 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gjc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Jan 28 15:04:37 crc kubenswrapper[4893]: I0128 15:04:37.590347 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z2gjc" podUID="e1399bb5-4202-4d0e-aac3-83bec9d52d2d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Jan 28 15:04:38 crc kubenswrapper[4893]: E0128 15:04:38.560492 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 28 15:04:38 crc kubenswrapper[4893]: E0128 15:04:38.561361 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7zdjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-5gtgr_openshift-marketplace(43843abc-ea99-476a-81c0-76d6530f7c75): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 28 15:04:38 crc kubenswrapper[4893]: E0128 15:04:38.563093 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-5gtgr" podUID="43843abc-ea99-476a-81c0-76d6530f7c75"
Jan 28 15:04:38 crc kubenswrapper[4893]: I0128 15:04:38.840797 4893 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-6q42k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 15:04:38 crc kubenswrapper[4893]: I0128 15:04:38.840873 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k" podUID="feaf053e-d992-479b-b7ac-f7383e0b4b35" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 15:04:39 crc kubenswrapper[4893]: I0128 15:04:39.099553 4893 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-nj5sn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 15:04:39 crc kubenswrapper[4893]: I0128 15:04:39.099634 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn" podUID="b5a371e6-d5dc-4971-8abf-c193da52013c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 15:04:42 crc kubenswrapper[4893]: E0128 15:04:42.789369 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5gtgr" podUID="43843abc-ea99-476a-81c0-76d6530f7c75"
Jan 28 15:04:44 crc kubenswrapper[4893]: E0128 15:04:44.873623 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 28 15:04:44 crc kubenswrapper[4893]: E0128 15:04:44.874082 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ncp5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-6rlkf_openshift-marketplace(8b19fd77-6353-4456-afd3-00dc264d614e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 28 15:04:44 crc kubenswrapper[4893]: E0128 15:04:44.875321 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-6rlkf" podUID="8b19fd77-6353-4456-afd3-00dc264d614e"
Jan 28 15:04:47 crc kubenswrapper[4893]: I0128 15:04:47.588915 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gjc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Jan 28 15:04:47 crc kubenswrapper[4893]: I0128 15:04:47.589075 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z2gjc" podUID="e1399bb5-4202-4d0e-aac3-83bec9d52d2d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Jan 28 15:04:48 crc kubenswrapper[4893]: I0128 15:04:48.840617 4893 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-6q42k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 15:04:48 crc kubenswrapper[4893]: I0128 15:04:48.842096 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k" podUID="feaf053e-d992-479b-b7ac-f7383e0b4b35" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 15:04:49 crc kubenswrapper[4893]: I0128 15:04:49.100185 4893 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-nj5sn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 15:04:49 crc kubenswrapper[4893]: I0128 15:04:49.100245 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn" podUID="b5a371e6-d5dc-4971-8abf-c193da52013c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 15:04:51 crc kubenswrapper[4893]: E0128 15:04:51.124193 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-6rlkf" podUID="8b19fd77-6353-4456-afd3-00dc264d614e"
Jan 28 15:04:57 crc kubenswrapper[4893]: I0128 15:04:57.589097 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gjc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Jan 28 15:04:57 crc kubenswrapper[4893]: I0128 15:04:57.589653 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z2gjc" podUID="e1399bb5-4202-4d0e-aac3-83bec9d52d2d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Jan 28 15:04:58 crc kubenswrapper[4893]: I0128 15:04:58.841461 4893 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-6q42k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 15:04:58 crc kubenswrapper[4893]: I0128 15:04:58.841562 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k" podUID="feaf053e-d992-479b-b7ac-f7383e0b4b35" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 15:04:59 crc kubenswrapper[4893]: I0128 15:04:59.099655 4893 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-nj5sn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 15:04:59 crc kubenswrapper[4893]: I0128 15:04:59.099736 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn" podUID="b5a371e6-d5dc-4971-8abf-c193da52013c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 15:05:00 crc kubenswrapper[4893]: E0128 15:05:00.066109 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 28 15:05:00 crc kubenswrapper[4893]: E0128 15:05:00.066554 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ll2rd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-mtslh_openshift-marketplace(0c2ed13a-5aec-42ad-80a0-1ee315e4fb12): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 28 15:05:00 crc kubenswrapper[4893]: E0128 15:05:00.067878 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-mtslh" podUID="0c2ed13a-5aec-42ad-80a0-1ee315e4fb12"
Jan 28 15:05:05 crc kubenswrapper[4893]: E0128 15:05:05.363952 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-mtslh" podUID="0c2ed13a-5aec-42ad-80a0-1ee315e4fb12"
Jan 28 15:05:05 crc kubenswrapper[4893]: E0128 15:05:05.371907 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 28 15:05:05 crc kubenswrapper[4893]: E0128 15:05:05.372431 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-shckb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-77mgk_openshift-marketplace(f9efa33f-313e-484f-967c-1d829b6f8250): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 28 15:05:05 crc kubenswrapper[4893]: E0128 15:05:05.373998 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-77mgk" podUID="f9efa33f-313e-484f-967c-1d829b6f8250"
Jan 28 15:05:05 crc kubenswrapper[4893]: E0128 15:05:05.384514 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 28 15:05:05 crc kubenswrapper[4893]: E0128 15:05:05.384779 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fjqm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-s2wp6_openshift-marketplace(c1d61ecd-2c35-4e84-85db-9ebe350850a6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 28 15:05:05 crc kubenswrapper[4893]: E0128 15:05:05.386574 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-s2wp6" podUID="c1d61ecd-2c35-4e84-85db-9ebe350850a6"
Jan 28 15:05:05 crc kubenswrapper[4893]: E0128 15:05:05.433586 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 28 15:05:05 crc kubenswrapper[4893]: E0128 15:05:05.433892 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hh9zt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-46wz5_openshift-marketplace(ace4b0ad-d8d3-48aa-8635-6e6e96030672): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 28 15:05:05 crc kubenswrapper[4893]: E0128 15:05:05.435207 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-46wz5" podUID="ace4b0ad-d8d3-48aa-8635-6e6e96030672"
Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.502169 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k"
Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.524716 4893 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.546150 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7fb7959c56-xkdmp"] Jan 28 15:05:05 crc kubenswrapper[4893]: E0128 15:05:05.550630 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 28 15:05:05 crc kubenswrapper[4893]: E0128 15:05:05.550948 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p26nl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-nwlnm_openshift-marketplace(f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:05:05 crc kubenswrapper[4893]: E0128 15:05:05.551010 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5a371e6-d5dc-4971-8abf-c193da52013c" containerName="route-controller-manager" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.551067 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a371e6-d5dc-4971-8abf-c193da52013c" containerName="route-controller-manager" Jan 28 15:05:05 crc kubenswrapper[4893]: E0128 15:05:05.551084 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feaf053e-d992-479b-b7ac-f7383e0b4b35" containerName="controller-manager" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.551090 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="feaf053e-d992-479b-b7ac-f7383e0b4b35" containerName="controller-manager" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.551338 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="feaf053e-d992-479b-b7ac-f7383e0b4b35" containerName="controller-manager" Jan 28 15:05:05 crc 
kubenswrapper[4893]: I0128 15:05:05.551350 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5a371e6-d5dc-4971-8abf-c193da52013c" containerName="route-controller-manager" Jan 28 15:05:05 crc kubenswrapper[4893]: E0128 15:05:05.552245 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-nwlnm" podUID="f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.552990 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fb7959c56-xkdmp" Jan 28 15:05:05 crc kubenswrapper[4893]: E0128 15:05:05.569362 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 28 15:05:05 crc kubenswrapper[4893]: E0128 15:05:05.569535 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8vtw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-g675f_openshift-marketplace(94f2541b-4f69-4bbc-9388-c040e53d85a0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:05:05 crc kubenswrapper[4893]: E0128 15:05:05.570639 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-g675f" podUID="94f2541b-4f69-4bbc-9388-c040e53d85a0" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.572388 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-7fb7959c56-xkdmp"] Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.578752 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7r8x\" (UniqueName: \"kubernetes.io/projected/feaf053e-d992-479b-b7ac-f7383e0b4b35-kube-api-access-k7r8x\") pod \"feaf053e-d992-479b-b7ac-f7383e0b4b35\" (UID: \"feaf053e-d992-479b-b7ac-f7383e0b4b35\") " Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.578806 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5a371e6-d5dc-4971-8abf-c193da52013c-config\") pod \"b5a371e6-d5dc-4971-8abf-c193da52013c\" (UID: \"b5a371e6-d5dc-4971-8abf-c193da52013c\") " Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.578871 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tphb\" (UniqueName: \"kubernetes.io/projected/b5a371e6-d5dc-4971-8abf-c193da52013c-kube-api-access-8tphb\") pod \"b5a371e6-d5dc-4971-8abf-c193da52013c\" (UID: \"b5a371e6-d5dc-4971-8abf-c193da52013c\") " Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.578897 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feaf053e-d992-479b-b7ac-f7383e0b4b35-config\") pod \"feaf053e-d992-479b-b7ac-f7383e0b4b35\" (UID: \"feaf053e-d992-479b-b7ac-f7383e0b4b35\") " Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.578917 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/feaf053e-d992-479b-b7ac-f7383e0b4b35-client-ca\") pod \"feaf053e-d992-479b-b7ac-f7383e0b4b35\" (UID: \"feaf053e-d992-479b-b7ac-f7383e0b4b35\") " Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.578936 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5a371e6-d5dc-4971-8abf-c193da52013c-serving-cert\") pod \"b5a371e6-d5dc-4971-8abf-c193da52013c\" (UID: \"b5a371e6-d5dc-4971-8abf-c193da52013c\") " Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.578957 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/feaf053e-d992-479b-b7ac-f7383e0b4b35-proxy-ca-bundles\") pod \"feaf053e-d992-479b-b7ac-f7383e0b4b35\" (UID: \"feaf053e-d992-479b-b7ac-f7383e0b4b35\") " Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.578986 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5a371e6-d5dc-4971-8abf-c193da52013c-client-ca\") pod \"b5a371e6-d5dc-4971-8abf-c193da52013c\" (UID: \"b5a371e6-d5dc-4971-8abf-c193da52013c\") " Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.579021 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feaf053e-d992-479b-b7ac-f7383e0b4b35-serving-cert\") pod \"feaf053e-d992-479b-b7ac-f7383e0b4b35\" (UID: \"feaf053e-d992-479b-b7ac-f7383e0b4b35\") " Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.579141 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33f7e358-34be-4503-bdd1-1235b134b9cb-serving-cert\") pod \"controller-manager-7fb7959c56-xkdmp\" (UID: 
\"33f7e358-34be-4503-bdd1-1235b134b9cb\") " pod="openshift-controller-manager/controller-manager-7fb7959c56-xkdmp" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.579182 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxbrq\" (UniqueName: \"kubernetes.io/projected/33f7e358-34be-4503-bdd1-1235b134b9cb-kube-api-access-lxbrq\") pod \"controller-manager-7fb7959c56-xkdmp\" (UID: \"33f7e358-34be-4503-bdd1-1235b134b9cb\") " pod="openshift-controller-manager/controller-manager-7fb7959c56-xkdmp" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.579231 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33f7e358-34be-4503-bdd1-1235b134b9cb-client-ca\") pod \"controller-manager-7fb7959c56-xkdmp\" (UID: \"33f7e358-34be-4503-bdd1-1235b134b9cb\") " pod="openshift-controller-manager/controller-manager-7fb7959c56-xkdmp" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.579261 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/33f7e358-34be-4503-bdd1-1235b134b9cb-proxy-ca-bundles\") pod \"controller-manager-7fb7959c56-xkdmp\" (UID: \"33f7e358-34be-4503-bdd1-1235b134b9cb\") " pod="openshift-controller-manager/controller-manager-7fb7959c56-xkdmp" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.579279 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33f7e358-34be-4503-bdd1-1235b134b9cb-config\") pod \"controller-manager-7fb7959c56-xkdmp\" (UID: \"33f7e358-34be-4503-bdd1-1235b134b9cb\") " pod="openshift-controller-manager/controller-manager-7fb7959c56-xkdmp" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.582765 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5a371e6-d5dc-4971-8abf-c193da52013c-config" (OuterVolumeSpecName: "config") pod "b5a371e6-d5dc-4971-8abf-c193da52013c" (UID: "b5a371e6-d5dc-4971-8abf-c193da52013c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.583434 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5a371e6-d5dc-4971-8abf-c193da52013c-client-ca" (OuterVolumeSpecName: "client-ca") pod "b5a371e6-d5dc-4971-8abf-c193da52013c" (UID: "b5a371e6-d5dc-4971-8abf-c193da52013c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.583892 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/feaf053e-d992-479b-b7ac-f7383e0b4b35-client-ca" (OuterVolumeSpecName: "client-ca") pod "feaf053e-d992-479b-b7ac-f7383e0b4b35" (UID: "feaf053e-d992-479b-b7ac-f7383e0b4b35"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.584381 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/feaf053e-d992-479b-b7ac-f7383e0b4b35-config" (OuterVolumeSpecName: "config") pod "feaf053e-d992-479b-b7ac-f7383e0b4b35" (UID: "feaf053e-d992-479b-b7ac-f7383e0b4b35"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.586293 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/feaf053e-d992-479b-b7ac-f7383e0b4b35-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "feaf053e-d992-479b-b7ac-f7383e0b4b35" (UID: "feaf053e-d992-479b-b7ac-f7383e0b4b35"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.634349 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feaf053e-d992-479b-b7ac-f7383e0b4b35-kube-api-access-k7r8x" (OuterVolumeSpecName: "kube-api-access-k7r8x") pod "feaf053e-d992-479b-b7ac-f7383e0b4b35" (UID: "feaf053e-d992-479b-b7ac-f7383e0b4b35"). InnerVolumeSpecName "kube-api-access-k7r8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.639302 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feaf053e-d992-479b-b7ac-f7383e0b4b35-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "feaf053e-d992-479b-b7ac-f7383e0b4b35" (UID: "feaf053e-d992-479b-b7ac-f7383e0b4b35"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.640013 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5a371e6-d5dc-4971-8abf-c193da52013c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b5a371e6-d5dc-4971-8abf-c193da52013c" (UID: "b5a371e6-d5dc-4971-8abf-c193da52013c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.646662 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5a371e6-d5dc-4971-8abf-c193da52013c-kube-api-access-8tphb" (OuterVolumeSpecName: "kube-api-access-8tphb") pod "b5a371e6-d5dc-4971-8abf-c193da52013c" (UID: "b5a371e6-d5dc-4971-8abf-c193da52013c"). InnerVolumeSpecName "kube-api-access-8tphb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.683547 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33f7e358-34be-4503-bdd1-1235b134b9cb-serving-cert\") pod \"controller-manager-7fb7959c56-xkdmp\" (UID: \"33f7e358-34be-4503-bdd1-1235b134b9cb\") " pod="openshift-controller-manager/controller-manager-7fb7959c56-xkdmp" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.683646 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxbrq\" (UniqueName: \"kubernetes.io/projected/33f7e358-34be-4503-bdd1-1235b134b9cb-kube-api-access-lxbrq\") pod \"controller-manager-7fb7959c56-xkdmp\" (UID: \"33f7e358-34be-4503-bdd1-1235b134b9cb\") " pod="openshift-controller-manager/controller-manager-7fb7959c56-xkdmp" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.683720 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33f7e358-34be-4503-bdd1-1235b134b9cb-client-ca\") pod \"controller-manager-7fb7959c56-xkdmp\" (UID: \"33f7e358-34be-4503-bdd1-1235b134b9cb\") " pod="openshift-controller-manager/controller-manager-7fb7959c56-xkdmp" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.683752 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/33f7e358-34be-4503-bdd1-1235b134b9cb-proxy-ca-bundles\") pod \"controller-manager-7fb7959c56-xkdmp\" (UID: \"33f7e358-34be-4503-bdd1-1235b134b9cb\") " pod="openshift-controller-manager/controller-manager-7fb7959c56-xkdmp" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.683789 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33f7e358-34be-4503-bdd1-1235b134b9cb-config\") pod \"controller-manager-7fb7959c56-xkdmp\" (UID: \"33f7e358-34be-4503-bdd1-1235b134b9cb\") " pod="openshift-controller-manager/controller-manager-7fb7959c56-xkdmp" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.683830 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5a371e6-d5dc-4971-8abf-c193da52013c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.683857 4893 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/feaf053e-d992-479b-b7ac-f7383e0b4b35-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.683872 4893 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5a371e6-d5dc-4971-8abf-c193da52013c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.683881 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/feaf053e-d992-479b-b7ac-f7383e0b4b35-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.683892 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7r8x\" (UniqueName: \"kubernetes.io/projected/feaf053e-d992-479b-b7ac-f7383e0b4b35-kube-api-access-k7r8x\") on node \"crc\" DevicePath \"\"" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.683901 4893 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5a371e6-d5dc-4971-8abf-c193da52013c-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.683910 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tphb\" (UniqueName: \"kubernetes.io/projected/b5a371e6-d5dc-4971-8abf-c193da52013c-kube-api-access-8tphb\") on node \"crc\" DevicePath \"\"" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.683933 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/feaf053e-d992-479b-b7ac-f7383e0b4b35-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.683942 4893 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/feaf053e-d992-479b-b7ac-f7383e0b4b35-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.685751 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33f7e358-34be-4503-bdd1-1235b134b9cb-client-ca\") pod \"controller-manager-7fb7959c56-xkdmp\" (UID: \"33f7e358-34be-4503-bdd1-1235b134b9cb\") " pod="openshift-controller-manager/controller-manager-7fb7959c56-xkdmp" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.685998 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/33f7e358-34be-4503-bdd1-1235b134b9cb-proxy-ca-bundles\") pod \"controller-manager-7fb7959c56-xkdmp\" (UID: \"33f7e358-34be-4503-bdd1-1235b134b9cb\") " pod="openshift-controller-manager/controller-manager-7fb7959c56-xkdmp" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.687843 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33f7e358-34be-4503-bdd1-1235b134b9cb-config\") pod \"controller-manager-7fb7959c56-xkdmp\" (UID: \"33f7e358-34be-4503-bdd1-1235b134b9cb\") " pod="openshift-controller-manager/controller-manager-7fb7959c56-xkdmp" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.690505 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33f7e358-34be-4503-bdd1-1235b134b9cb-serving-cert\") pod \"controller-manager-7fb7959c56-xkdmp\" (UID: \"33f7e358-34be-4503-bdd1-1235b134b9cb\") " pod="openshift-controller-manager/controller-manager-7fb7959c56-xkdmp" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.720773 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxbrq\" (UniqueName: \"kubernetes.io/projected/33f7e358-34be-4503-bdd1-1235b134b9cb-kube-api-access-lxbrq\") pod \"controller-manager-7fb7959c56-xkdmp\" (UID: \"33f7e358-34be-4503-bdd1-1235b134b9cb\") " pod="openshift-controller-manager/controller-manager-7fb7959c56-xkdmp" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.722923 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.722978 4893 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.723029 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.723633 4893 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95"} pod="openshift-machine-config-operator/machine-config-daemon-l2nht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.723682 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" containerID="cri-o://d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95" gracePeriod=600 Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.753254 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fb7959c56-xkdmp" Jan 28 15:05:05 crc kubenswrapper[4893]: I0128 15:05:05.916339 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 15:05:06 crc kubenswrapper[4893]: I0128 15:05:06.022066 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 15:05:06 crc kubenswrapper[4893]: I0128 15:05:06.042303 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-dqjfn"] Jan 28 15:05:06 crc kubenswrapper[4893]: W0128 15:05:06.043515 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6f68e997_8efc_4d18_bc36_8c55c1c80630.slice/crio-e5c7bacf3782358de6f461e425e6f7a960dae8d6da1032e40360e6e271c1061f WatchSource:0}: Error finding container e5c7bacf3782358de6f461e425e6f7a960dae8d6da1032e40360e6e271c1061f: Status 404 returned error can't find the container with id e5c7bacf3782358de6f461e425e6f7a960dae8d6da1032e40360e6e271c1061f Jan 28 15:05:06 crc kubenswrapper[4893]: W0128 15:05:06.058048 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27c2667f_3b81_4103_b924_fd2ec1678757.slice/crio-7a83e1849556d3737abe68ed5b2592f1f0fe1ad182231ed63f38b3fff7445ffb WatchSource:0}: Error finding container 7a83e1849556d3737abe68ed5b2592f1f0fe1ad182231ed63f38b3fff7445ffb: Status 404 returned error can't find the container with id 7a83e1849556d3737abe68ed5b2592f1f0fe1ad182231ed63f38b3fff7445ffb Jan 28 15:05:06 crc kubenswrapper[4893]: I0128 15:05:06.180278 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-z2gjc" event={"ID":"e1399bb5-4202-4d0e-aac3-83bec9d52d2d","Type":"ContainerStarted","Data":"796d70f2422c68af2e4afac06e33de45ef167179c9993c45b6241cfb61b72b1e"} Jan 28 15:05:06 crc kubenswrapper[4893]: I0128 15:05:06.181341 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console/downloads-7954f5f757-z2gjc" Jan 28 15:05:06 crc kubenswrapper[4893]: I0128 15:05:06.181410 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gjc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 28 15:05:06 crc kubenswrapper[4893]: I0128 15:05:06.181441 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z2gjc" podUID="e1399bb5-4202-4d0e-aac3-83bec9d52d2d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 28 15:05:06 crc kubenswrapper[4893]: I0128 15:05:06.183917 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dqjfn" event={"ID":"27c2667f-3b81-4103-b924-fd2ec1678757","Type":"ContainerStarted","Data":"7a83e1849556d3737abe68ed5b2592f1f0fe1ad182231ed63f38b3fff7445ffb"} Jan 28 15:05:06 crc kubenswrapper[4893]: I0128 15:05:06.188656 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c8921c1c-eff5-4b34-a390-ea1bcfacb84c","Type":"ContainerStarted","Data":"a37fee967703a54c6558f095f6515f715478d0bff671baa92e47e318a584633f"} Jan 28 15:05:06 crc kubenswrapper[4893]: I0128 15:05:06.201690 4893 generic.go:334] "Generic (PLEG): container finished" podID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerID="d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95" exitCode=0 Jan 28 15:05:06 crc kubenswrapper[4893]: I0128 15:05:06.201755 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" event={"ID":"b2ddd967-f9a8-464a-95de-512c9c5874fd","Type":"ContainerDied","Data":"d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95"} Jan 28 15:05:06 crc kubenswrapper[4893]: I0128 15:05:06.201779 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" event={"ID":"b2ddd967-f9a8-464a-95de-512c9c5874fd","Type":"ContainerStarted","Data":"20d8eb6fb2ed649557150caacec59c356900810bc0df5c731a7427a65b6878f0"} Jan 28 15:05:06 crc kubenswrapper[4893]: I0128 15:05:06.203453 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn" event={"ID":"b5a371e6-d5dc-4971-8abf-c193da52013c","Type":"ContainerDied","Data":"4a4342f8a1ab49b27c3a520725c46962ccd6e6937700bfa4fcd691ad2386cf5e"} Jan 28 15:05:06 crc kubenswrapper[4893]: I0128 15:05:06.203542 4893 scope.go:117] "RemoveContainer" containerID="454eaf7ce10338a288bfc49269a2bc9cdea243ac7f60fcf90ed62ab8627e447e" Jan 28 15:05:06 crc kubenswrapper[4893]: I0128 15:05:06.203739 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn" Jan 28 15:05:06 crc kubenswrapper[4893]: I0128 15:05:06.208810 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6f68e997-8efc-4d18-bc36-8c55c1c80630","Type":"ContainerStarted","Data":"e5c7bacf3782358de6f461e425e6f7a960dae8d6da1032e40360e6e271c1061f"} Jan 28 15:05:06 crc kubenswrapper[4893]: I0128 15:05:06.212981 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k" Jan 28 15:05:06 crc kubenswrapper[4893]: E0128 15:05:06.217540 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-g675f" podUID="94f2541b-4f69-4bbc-9388-c040e53d85a0" Jan 28 15:05:06 crc kubenswrapper[4893]: I0128 15:05:06.218551 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6q42k" event={"ID":"feaf053e-d992-479b-b7ac-f7383e0b4b35","Type":"ContainerDied","Data":"710f7e777770181058cf45d2025a1d1a810c3b4ed5c55e9149cffa6e1e8937b0"} Jan 28 15:05:06 crc kubenswrapper[4893]: E0128 15:05:06.224267 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-77mgk" podUID="f9efa33f-313e-484f-967c-1d829b6f8250" Jan 28 15:05:06 crc kubenswrapper[4893]: E0128 15:05:06.224372 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-46wz5" podUID="ace4b0ad-d8d3-48aa-8635-6e6e96030672" Jan 28 15:05:06 crc kubenswrapper[4893]: I0128 15:05:06.258433 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7fb7959c56-xkdmp"] Jan 28 15:05:06 crc kubenswrapper[4893]: I0128 15:05:06.272574 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn"] Jan 28 15:05:06 crc kubenswrapper[4893]: I0128 15:05:06.276805 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-nj5sn"] Jan 28 15:05:06 crc kubenswrapper[4893]: I0128 15:05:06.286888 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6q42k"] Jan 28 15:05:06 crc kubenswrapper[4893]: I0128 15:05:06.290647 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6q42k"] Jan 28 15:05:06 crc kubenswrapper[4893]: I0128 15:05:06.900181 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5a371e6-d5dc-4971-8abf-c193da52013c" path="/var/lib/kubelet/pods/b5a371e6-d5dc-4971-8abf-c193da52013c/volumes" Jan 28 15:05:06 crc kubenswrapper[4893]: I0128 15:05:06.901931 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feaf053e-d992-479b-b7ac-f7383e0b4b35" path="/var/lib/kubelet/pods/feaf053e-d992-479b-b7ac-f7383e0b4b35/volumes" Jan 28 15:05:07 crc kubenswrapper[4893]: W0128 15:05:07.038999 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33f7e358_34be_4503_bdd1_1235b134b9cb.slice/crio-e2d338548ec36f704a41a9c9887e3a600bf16a2724de009d2431f86e419193c4 WatchSource:0}: Error finding container e2d338548ec36f704a41a9c9887e3a600bf16a2724de009d2431f86e419193c4: Status 404 returned error can't find the container with 
id e2d338548ec36f704a41a9c9887e3a600bf16a2724de009d2431f86e419193c4 Jan 28 15:05:07 crc kubenswrapper[4893]: I0128 15:05:07.155028 4893 scope.go:117] "RemoveContainer" containerID="0019da72b398ba32c789693724241126a01f7604c9416f74b5fae6af133b4fc2" Jan 28 15:05:07 crc kubenswrapper[4893]: I0128 15:05:07.237584 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6f68e997-8efc-4d18-bc36-8c55c1c80630","Type":"ContainerStarted","Data":"da912a2892844ddbad44c08c9325c8a961b556edb35b14102018a33635ddc6aa"} Jan 28 15:05:07 crc kubenswrapper[4893]: I0128 15:05:07.239556 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fb7959c56-xkdmp" event={"ID":"33f7e358-34be-4503-bdd1-1235b134b9cb","Type":"ContainerStarted","Data":"e2d338548ec36f704a41a9c9887e3a600bf16a2724de009d2431f86e419193c4"} Jan 28 15:05:07 crc kubenswrapper[4893]: I0128 15:05:07.246402 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dqjfn" event={"ID":"27c2667f-3b81-4103-b924-fd2ec1678757","Type":"ContainerStarted","Data":"7a591e9b5740e466e4ed59587dd0f038f96d3184be3e487c45e5de24b3f57af9"} Jan 28 15:05:07 crc kubenswrapper[4893]: I0128 15:05:07.249702 4893 generic.go:334] "Generic (PLEG): container finished" podID="c8921c1c-eff5-4b34-a390-ea1bcfacb84c" containerID="8c98a7e1e9b1ebc94044c45c729f03c9c43ae2365a447152c79bda3ef130d5fd" exitCode=0 Jan 28 15:05:07 crc kubenswrapper[4893]: I0128 15:05:07.249788 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c8921c1c-eff5-4b34-a390-ea1bcfacb84c","Type":"ContainerDied","Data":"8c98a7e1e9b1ebc94044c45c729f03c9c43ae2365a447152c79bda3ef130d5fd"} Jan 28 15:05:07 crc kubenswrapper[4893]: I0128 15:05:07.259946 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gjc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 28 15:05:07 crc kubenswrapper[4893]: I0128 15:05:07.260017 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z2gjc" podUID="e1399bb5-4202-4d0e-aac3-83bec9d52d2d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 28 15:05:07 crc kubenswrapper[4893]: I0128 15:05:07.268941 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=32.268893437 podStartE2EDuration="32.268893437s" podCreationTimestamp="2026-01-28 15:04:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:07.263241032 +0000 UTC m=+225.036856110" watchObservedRunningTime="2026-01-28 15:05:07.268893437 +0000 UTC m=+225.042508465" Jan 28 15:05:07 crc kubenswrapper[4893]: I0128 15:05:07.588592 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gjc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 28 15:05:07 crc kubenswrapper[4893]: I0128 15:05:07.588667 4893 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-z2gjc" podUID="e1399bb5-4202-4d0e-aac3-83bec9d52d2d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 28 15:05:07 crc kubenswrapper[4893]: I0128 15:05:07.591070 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gjc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 28 15:05:07 crc kubenswrapper[4893]: I0128 15:05:07.591152 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-z2gjc" podUID="e1399bb5-4202-4d0e-aac3-83bec9d52d2d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.257940 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fb7959c56-xkdmp" event={"ID":"33f7e358-34be-4503-bdd1-1235b134b9cb","Type":"ContainerStarted","Data":"992428a903d8e6d9da9842fa1d382201aa0fa09acb3141eb85c3f6a109cf300c"} Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.258666 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7fb7959c56-xkdmp" Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.263509 4893 generic.go:334] "Generic (PLEG): container finished" podID="8b19fd77-6353-4456-afd3-00dc264d614e" containerID="98fe91ad98884eaac2d360cf66b662e82d5e18c889e93f50ad74307dcb12db3b" exitCode=0 Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.263617 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rlkf" event={"ID":"8b19fd77-6353-4456-afd3-00dc264d614e","Type":"ContainerDied","Data":"98fe91ad98884eaac2d360cf66b662e82d5e18c889e93f50ad74307dcb12db3b"} Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.264868 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7fb7959c56-xkdmp" Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.266539 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5gtgr" event={"ID":"43843abc-ea99-476a-81c0-76d6530f7c75","Type":"ContainerStarted","Data":"3d4a9aca369ea84e9c9cd79125c3f05ae4fa265351806d8203db37dc6c33ee55"} Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.269425 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dqjfn" event={"ID":"27c2667f-3b81-4103-b924-fd2ec1678757","Type":"ContainerStarted","Data":"9ba3661debd18108346c2858cd5f58fade84e1706030754357971bdd8df3ac88"} Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.269997 4893 patch_prober.go:28] interesting pod/downloads-7954f5f757-z2gjc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.271393 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-z2gjc" podUID="e1399bb5-4202-4d0e-aac3-83bec9d52d2d" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.302768 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7fb7959c56-xkdmp" podStartSLOduration=42.302743219 podStartE2EDuration="42.302743219s" podCreationTimestamp="2026-01-28 15:04:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:08.282579687 +0000 UTC m=+226.056194705" watchObservedRunningTime="2026-01-28 15:05:08.302743219 +0000 UTC m=+226.076358267" Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.327285 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-dqjfn" podStartSLOduration=206.327266542 podStartE2EDuration="3m26.327266542s" podCreationTimestamp="2026-01-28 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:08.323995842 +0000 UTC m=+226.097610880" watchObservedRunningTime="2026-01-28 15:05:08.327266542 +0000 UTC m=+226.100881560" Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.413681 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv"] Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.414667 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv" Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.418949 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.419172 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.419396 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.419651 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.419866 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.424151 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.435717 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv"] Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.553419 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7404534-e869-46d4-a493-e8971172b7b0-config\") pod \"route-controller-manager-6d679b55d4-dhpkv\" (UID: \"f7404534-e869-46d4-a493-e8971172b7b0\") " pod="openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv" Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.553520 4893 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7404534-e869-46d4-a493-e8971172b7b0-client-ca\") pod \"route-controller-manager-6d679b55d4-dhpkv\" (UID: \"f7404534-e869-46d4-a493-e8971172b7b0\") " pod="openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv"
Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.553555 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7404534-e869-46d4-a493-e8971172b7b0-serving-cert\") pod \"route-controller-manager-6d679b55d4-dhpkv\" (UID: \"f7404534-e869-46d4-a493-e8971172b7b0\") " pod="openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv"
Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.553584 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxp26\" (UniqueName: \"kubernetes.io/projected/f7404534-e869-46d4-a493-e8971172b7b0-kube-api-access-xxp26\") pod \"route-controller-manager-6d679b55d4-dhpkv\" (UID: \"f7404534-e869-46d4-a493-e8971172b7b0\") " pod="openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv"
Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.633871 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.656215 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7404534-e869-46d4-a493-e8971172b7b0-config\") pod \"route-controller-manager-6d679b55d4-dhpkv\" (UID: \"f7404534-e869-46d4-a493-e8971172b7b0\") " pod="openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv"
Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.656265 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7404534-e869-46d4-a493-e8971172b7b0-client-ca\") pod \"route-controller-manager-6d679b55d4-dhpkv\" (UID: \"f7404534-e869-46d4-a493-e8971172b7b0\") " pod="openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv"
Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.656292 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7404534-e869-46d4-a493-e8971172b7b0-serving-cert\") pod \"route-controller-manager-6d679b55d4-dhpkv\" (UID: \"f7404534-e869-46d4-a493-e8971172b7b0\") " pod="openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv"
Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.656314 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxp26\" (UniqueName: \"kubernetes.io/projected/f7404534-e869-46d4-a493-e8971172b7b0-kube-api-access-xxp26\") pod \"route-controller-manager-6d679b55d4-dhpkv\" (UID: \"f7404534-e869-46d4-a493-e8971172b7b0\") " pod="openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv"
Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.658411 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7404534-e869-46d4-a493-e8971172b7b0-config\") pod \"route-controller-manager-6d679b55d4-dhpkv\" (UID: \"f7404534-e869-46d4-a493-e8971172b7b0\") " pod="openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv"
Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.658816 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7404534-e869-46d4-a493-e8971172b7b0-client-ca\") pod \"route-controller-manager-6d679b55d4-dhpkv\" (UID: \"f7404534-e869-46d4-a493-e8971172b7b0\") " pod="openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv"
Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.695119 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7404534-e869-46d4-a493-e8971172b7b0-serving-cert\") pod \"route-controller-manager-6d679b55d4-dhpkv\" (UID: \"f7404534-e869-46d4-a493-e8971172b7b0\") " pod="openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv"
Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.698523 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxp26\" (UniqueName: \"kubernetes.io/projected/f7404534-e869-46d4-a493-e8971172b7b0-kube-api-access-xxp26\") pod \"route-controller-manager-6d679b55d4-dhpkv\" (UID: \"f7404534-e869-46d4-a493-e8971172b7b0\") " pod="openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv"
Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.758095 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8921c1c-eff5-4b34-a390-ea1bcfacb84c-kube-api-access\") pod \"c8921c1c-eff5-4b34-a390-ea1bcfacb84c\" (UID: \"c8921c1c-eff5-4b34-a390-ea1bcfacb84c\") "
Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.758207 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8921c1c-eff5-4b34-a390-ea1bcfacb84c-kubelet-dir\") pod \"c8921c1c-eff5-4b34-a390-ea1bcfacb84c\" (UID: \"c8921c1c-eff5-4b34-a390-ea1bcfacb84c\") "
Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.758341 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8921c1c-eff5-4b34-a390-ea1bcfacb84c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c8921c1c-eff5-4b34-a390-ea1bcfacb84c" (UID: "c8921c1c-eff5-4b34-a390-ea1bcfacb84c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.758746 4893 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8921c1c-eff5-4b34-a390-ea1bcfacb84c-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.759654 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv"
Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.763715 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8921c1c-eff5-4b34-a390-ea1bcfacb84c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c8921c1c-eff5-4b34-a390-ea1bcfacb84c" (UID: "c8921c1c-eff5-4b34-a390-ea1bcfacb84c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.860540 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8921c1c-eff5-4b34-a390-ea1bcfacb84c-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 28 15:05:08 crc kubenswrapper[4893]: I0128 15:05:08.992358 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv"]
Jan 28 15:05:09 crc kubenswrapper[4893]: W0128 15:05:09.004584 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7404534_e869_46d4_a493_e8971172b7b0.slice/crio-e1d505b5f93be5d7e22a2db0b1774a015368b5eb5d67103ddc60df3953602fe9 WatchSource:0}: Error finding container e1d505b5f93be5d7e22a2db0b1774a015368b5eb5d67103ddc60df3953602fe9: Status 404 returned error can't find the container with id e1d505b5f93be5d7e22a2db0b1774a015368b5eb5d67103ddc60df3953602fe9
Jan 28 15:05:09 crc kubenswrapper[4893]: I0128 15:05:09.275973 4893 generic.go:334] "Generic (PLEG): container finished" podID="43843abc-ea99-476a-81c0-76d6530f7c75" containerID="3d4a9aca369ea84e9c9cd79125c3f05ae4fa265351806d8203db37dc6c33ee55" exitCode=0
Jan 28 15:05:09 crc kubenswrapper[4893]: I0128 15:05:09.276069 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5gtgr" event={"ID":"43843abc-ea99-476a-81c0-76d6530f7c75","Type":"ContainerDied","Data":"3d4a9aca369ea84e9c9cd79125c3f05ae4fa265351806d8203db37dc6c33ee55"}
Jan 28 15:05:09 crc kubenswrapper[4893]: I0128 15:05:09.279179 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv" event={"ID":"f7404534-e869-46d4-a493-e8971172b7b0","Type":"ContainerStarted","Data":"fccf0514eba05cf1f4f7bd3091014112f70458b8798d34ef25f85a18bd98a245"}
Jan 28 15:05:09 crc kubenswrapper[4893]: I0128 15:05:09.279287 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv"
Jan 28 15:05:09 crc kubenswrapper[4893]: I0128 15:05:09.279374 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv" event={"ID":"f7404534-e869-46d4-a493-e8971172b7b0","Type":"ContainerStarted","Data":"e1d505b5f93be5d7e22a2db0b1774a015368b5eb5d67103ddc60df3953602fe9"}
Jan 28 15:05:09 crc kubenswrapper[4893]: I0128 15:05:09.281035 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c8921c1c-eff5-4b34-a390-ea1bcfacb84c","Type":"ContainerDied","Data":"a37fee967703a54c6558f095f6515f715478d0bff671baa92e47e318a584633f"}
Jan 28 15:05:09 crc kubenswrapper[4893]: I0128 15:05:09.281140 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a37fee967703a54c6558f095f6515f715478d0bff671baa92e47e318a584633f"
Jan 28 15:05:09 crc kubenswrapper[4893]: I0128 15:05:09.281237 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 28 15:05:09 crc kubenswrapper[4893]: I0128 15:05:09.286981 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rlkf" event={"ID":"8b19fd77-6353-4456-afd3-00dc264d614e","Type":"ContainerStarted","Data":"b65f4f26f42739d05d35b188a040e2464b0bda014901febd12fd77ea02257435"}
Jan 28 15:05:09 crc kubenswrapper[4893]: I0128 15:05:09.335370 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6rlkf" podStartSLOduration=3.18329462 podStartE2EDuration="1m19.335344898s" podCreationTimestamp="2026-01-28 15:03:50 +0000 UTC" firstStartedPulling="2026-01-28 15:03:52.60819221 +0000 UTC m=+150.381807238" lastFinishedPulling="2026-01-28 15:05:08.760242488 +0000 UTC m=+226.533857516" observedRunningTime="2026-01-28 15:05:09.334903036 +0000 UTC m=+227.108518064" watchObservedRunningTime="2026-01-28 15:05:09.335344898 +0000 UTC m=+227.108959926"
Jan 28 15:05:09 crc kubenswrapper[4893]: I0128 15:05:09.912669 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv"
Jan 28 15:05:09 crc kubenswrapper[4893]: I0128 15:05:09.933403 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv" podStartSLOduration=43.933353516 podStartE2EDuration="43.933353516s" podCreationTimestamp="2026-01-28 15:04:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:09.35951144 +0000 UTC m=+227.133126468" watchObservedRunningTime="2026-01-28 15:05:09.933353516 +0000 UTC m=+227.706968544"
Jan 28 15:05:10 crc kubenswrapper[4893]: I0128 15:05:10.472082 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6rlkf"
Jan 28 15:05:10 crc kubenswrapper[4893]: I0128 15:05:10.472751 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6rlkf"
Jan 28 15:05:11 crc kubenswrapper[4893]: I0128 15:05:11.303609 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5gtgr" event={"ID":"43843abc-ea99-476a-81c0-76d6530f7c75","Type":"ContainerStarted","Data":"3196393a71fd4433f0a75e645bcee184cc0cfb262a7fffeb39aa66bb1a00dbbc"}
Jan 28 15:05:11 crc kubenswrapper[4893]: I0128 15:05:11.328202 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5gtgr" podStartSLOduration=5.592836775 podStartE2EDuration="1m24.328171031s" podCreationTimestamp="2026-01-28 15:03:47 +0000 UTC" firstStartedPulling="2026-01-28 15:03:51.583192427 +0000 UTC m=+149.356807455" lastFinishedPulling="2026-01-28 15:05:10.318526683 +0000 UTC m=+228.092141711" observedRunningTime="2026-01-28 15:05:11.325132698 +0000 UTC m=+229.098747746" watchObservedRunningTime="2026-01-28 15:05:11.328171031 +0000 UTC m=+229.101786059"
Jan 28 15:05:11 crc kubenswrapper[4893]: I0128 15:05:11.866099 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-6rlkf" podUID="8b19fd77-6353-4456-afd3-00dc264d614e" containerName="registry-server" probeResult="failure" output=<
Jan 28 15:05:11 crc kubenswrapper[4893]: timeout: failed to connect service ":50051" within 1s
Jan 28 15:05:11 crc kubenswrapper[4893]: >
Jan 28 15:05:17 crc kubenswrapper[4893]: I0128 15:05:17.602607 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-z2gjc"
Jan 28 15:05:18 crc kubenswrapper[4893]: I0128 15:05:18.176946 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5gtgr"
Jan 28 15:05:18 crc kubenswrapper[4893]: I0128 15:05:18.177390 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5gtgr"
Jan 28 15:05:18 crc kubenswrapper[4893]: I0128 15:05:18.449623 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5gtgr"
Jan 28 15:05:18 crc kubenswrapper[4893]: I0128 15:05:18.941318 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5gtgr"
Jan 28 15:05:20 crc kubenswrapper[4893]: I0128 15:05:20.516615 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6rlkf"
Jan 28 15:05:20 crc kubenswrapper[4893]: I0128 15:05:20.551519 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6rlkf"
Jan 28 15:05:23 crc kubenswrapper[4893]: I0128 15:05:23.733188 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rlkf"]
Jan 28 15:05:23 crc kubenswrapper[4893]: I0128 15:05:23.733531 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6rlkf" podUID="8b19fd77-6353-4456-afd3-00dc264d614e" containerName="registry-server" containerID="cri-o://b65f4f26f42739d05d35b188a040e2464b0bda014901febd12fd77ea02257435" gracePeriod=2
Jan 28 15:05:27 crc kubenswrapper[4893]: I0128 15:05:27.426430 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6rlkf_8b19fd77-6353-4456-afd3-00dc264d614e/registry-server/0.log"
Jan 28 15:05:27 crc kubenswrapper[4893]: I0128 15:05:27.427405 4893 generic.go:334] "Generic (PLEG): container finished" podID="8b19fd77-6353-4456-afd3-00dc264d614e" containerID="b65f4f26f42739d05d35b188a040e2464b0bda014901febd12fd77ea02257435" exitCode=137
Jan 28 15:05:27 crc kubenswrapper[4893]: I0128 15:05:27.427452 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rlkf" event={"ID":"8b19fd77-6353-4456-afd3-00dc264d614e","Type":"ContainerDied","Data":"b65f4f26f42739d05d35b188a040e2464b0bda014901febd12fd77ea02257435"}
Jan 28 15:05:30 crc kubenswrapper[4893]: E0128 15:05:30.472574 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b65f4f26f42739d05d35b188a040e2464b0bda014901febd12fd77ea02257435 is running failed: container process not found" containerID="b65f4f26f42739d05d35b188a040e2464b0bda014901febd12fd77ea02257435" cmd=["grpc_health_probe","-addr=:50051"]
Jan 28 15:05:30 crc kubenswrapper[4893]: E0128 15:05:30.473377 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b65f4f26f42739d05d35b188a040e2464b0bda014901febd12fd77ea02257435 is running failed: container process not found" containerID="b65f4f26f42739d05d35b188a040e2464b0bda014901febd12fd77ea02257435" cmd=["grpc_health_probe","-addr=:50051"]
Jan 28 15:05:30 crc kubenswrapper[4893]: E0128 15:05:30.473640 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b65f4f26f42739d05d35b188a040e2464b0bda014901febd12fd77ea02257435 is running failed: container process not found" containerID="b65f4f26f42739d05d35b188a040e2464b0bda014901febd12fd77ea02257435" cmd=["grpc_health_probe","-addr=:50051"]
Jan 28 15:05:30 crc kubenswrapper[4893]: E0128 15:05:30.473674 4893 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b65f4f26f42739d05d35b188a040e2464b0bda014901febd12fd77ea02257435 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-6rlkf" podUID="8b19fd77-6353-4456-afd3-00dc264d614e" containerName="registry-server"
Jan 28 15:05:40 crc kubenswrapper[4893]: E0128 15:05:40.472424 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b65f4f26f42739d05d35b188a040e2464b0bda014901febd12fd77ea02257435 is running failed: container process not found" containerID="b65f4f26f42739d05d35b188a040e2464b0bda014901febd12fd77ea02257435" cmd=["grpc_health_probe","-addr=:50051"]
Jan 28 15:05:40 crc kubenswrapper[4893]: E0128 15:05:40.473018 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b65f4f26f42739d05d35b188a040e2464b0bda014901febd12fd77ea02257435 is running failed: container process not found" containerID="b65f4f26f42739d05d35b188a040e2464b0bda014901febd12fd77ea02257435" cmd=["grpc_health_probe","-addr=:50051"]
Jan 28 15:05:40 crc kubenswrapper[4893]: E0128 15:05:40.473273 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b65f4f26f42739d05d35b188a040e2464b0bda014901febd12fd77ea02257435 is running failed: container process not found" containerID="b65f4f26f42739d05d35b188a040e2464b0bda014901febd12fd77ea02257435" cmd=["grpc_health_probe","-addr=:50051"]
Jan 28 15:05:40 crc kubenswrapper[4893]: E0128 15:05:40.473331 4893 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b65f4f26f42739d05d35b188a040e2464b0bda014901febd12fd77ea02257435 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-6rlkf" podUID="8b19fd77-6353-4456-afd3-00dc264d614e" containerName="registry-server"
Jan 28 15:05:41 crc kubenswrapper[4893]: I0128 15:05:41.295196 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6rlkf_8b19fd77-6353-4456-afd3-00dc264d614e/registry-server/0.log"
Jan 28 15:05:41 crc kubenswrapper[4893]: I0128 15:05:41.296791 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6rlkf"
Jan 28 15:05:41 crc kubenswrapper[4893]: I0128 15:05:41.371680 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b19fd77-6353-4456-afd3-00dc264d614e-utilities\") pod \"8b19fd77-6353-4456-afd3-00dc264d614e\" (UID: \"8b19fd77-6353-4456-afd3-00dc264d614e\") "
Jan 28 15:05:41 crc kubenswrapper[4893]: I0128 15:05:41.372003 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncp5l\" (UniqueName: \"kubernetes.io/projected/8b19fd77-6353-4456-afd3-00dc264d614e-kube-api-access-ncp5l\") pod \"8b19fd77-6353-4456-afd3-00dc264d614e\" (UID: \"8b19fd77-6353-4456-afd3-00dc264d614e\") "
Jan 28 15:05:41 crc kubenswrapper[4893]: I0128 15:05:41.372159 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b19fd77-6353-4456-afd3-00dc264d614e-catalog-content\") pod \"8b19fd77-6353-4456-afd3-00dc264d614e\" (UID: \"8b19fd77-6353-4456-afd3-00dc264d614e\") "
Jan 28 15:05:41 crc kubenswrapper[4893]: I0128 15:05:41.372939 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b19fd77-6353-4456-afd3-00dc264d614e-utilities" (OuterVolumeSpecName: "utilities") pod "8b19fd77-6353-4456-afd3-00dc264d614e" (UID: "8b19fd77-6353-4456-afd3-00dc264d614e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:05:41 crc kubenswrapper[4893]: I0128 15:05:41.381215 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b19fd77-6353-4456-afd3-00dc264d614e-kube-api-access-ncp5l" (OuterVolumeSpecName: "kube-api-access-ncp5l") pod "8b19fd77-6353-4456-afd3-00dc264d614e" (UID: "8b19fd77-6353-4456-afd3-00dc264d614e"). InnerVolumeSpecName "kube-api-access-ncp5l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:05:41 crc kubenswrapper[4893]: I0128 15:05:41.391989 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b19fd77-6353-4456-afd3-00dc264d614e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b19fd77-6353-4456-afd3-00dc264d614e" (UID: "8b19fd77-6353-4456-afd3-00dc264d614e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:05:41 crc kubenswrapper[4893]: I0128 15:05:41.474082 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b19fd77-6353-4456-afd3-00dc264d614e-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 15:05:41 crc kubenswrapper[4893]: I0128 15:05:41.474119 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncp5l\" (UniqueName: \"kubernetes.io/projected/8b19fd77-6353-4456-afd3-00dc264d614e-kube-api-access-ncp5l\") on node \"crc\" DevicePath \"\""
Jan 28 15:05:41 crc kubenswrapper[4893]: I0128 15:05:41.474131 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b19fd77-6353-4456-afd3-00dc264d614e-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 15:05:41 crc kubenswrapper[4893]: I0128 15:05:41.517623 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6rlkf_8b19fd77-6353-4456-afd3-00dc264d614e/registry-server/0.log"
Jan 28 15:05:41 crc kubenswrapper[4893]: I0128 15:05:41.518458 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rlkf" event={"ID":"8b19fd77-6353-4456-afd3-00dc264d614e","Type":"ContainerDied","Data":"81a9a90ecdcaf30632377b42d24c9cbe92049ecf8a790f495f96fdcd034e7790"}
Jan 28 15:05:41 crc kubenswrapper[4893]: I0128 15:05:41.518541 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6rlkf"
Jan 28 15:05:41 crc kubenswrapper[4893]: I0128 15:05:41.518543 4893 scope.go:117] "RemoveContainer" containerID="b65f4f26f42739d05d35b188a040e2464b0bda014901febd12fd77ea02257435"
Jan 28 15:05:41 crc kubenswrapper[4893]: I0128 15:05:41.550818 4893 scope.go:117] "RemoveContainer" containerID="98fe91ad98884eaac2d360cf66b662e82d5e18c889e93f50ad74307dcb12db3b"
Jan 28 15:05:41 crc kubenswrapper[4893]: I0128 15:05:41.579801 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rlkf"]
Jan 28 15:05:41 crc kubenswrapper[4893]: I0128 15:05:41.581083 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rlkf"]
Jan 28 15:05:41 crc kubenswrapper[4893]: I0128 15:05:41.585121 4893 scope.go:117] "RemoveContainer" containerID="75e7d2f7c43a7f281adb0f68b262d0f34ab6a1ba5c50e7566c5d889a0b9abe95"
Jan 28 15:05:42 crc kubenswrapper[4893]: I0128 15:05:42.526778 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtslh" event={"ID":"0c2ed13a-5aec-42ad-80a0-1ee315e4fb12","Type":"ContainerStarted","Data":"616e13dce44cad0ae55ffbed0a8f8195bb6f01d1875a5a34ea3ae09453c2331b"}
Jan 28 15:05:42 crc kubenswrapper[4893]: I0128 15:05:42.529699 4893 generic.go:334] "Generic (PLEG): container finished" podID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" containerID="0572a0380a6ec2de2ae53477eec9f2f41a3b6ad599c48e6c96604e120c17685a" exitCode=0
Jan 28 15:05:42 crc kubenswrapper[4893]: I0128 15:05:42.529738 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2wp6" event={"ID":"c1d61ecd-2c35-4e84-85db-9ebe350850a6","Type":"ContainerDied","Data":"0572a0380a6ec2de2ae53477eec9f2f41a3b6ad599c48e6c96604e120c17685a"}
Jan 28 15:05:42 crc kubenswrapper[4893]: I0128 15:05:42.533500 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nwlnm" event={"ID":"f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2","Type":"ContainerStarted","Data":"cf99786428fb4340e15a1b7e27396705ce11c36a5509e9d98b7c8b11dc84fe64"}
Jan 28 15:05:42 crc kubenswrapper[4893]: I0128 15:05:42.537535 4893 generic.go:334] "Generic (PLEG): container finished" podID="94f2541b-4f69-4bbc-9388-c040e53d85a0" containerID="fc19aea9d4aaf64072a93bc9f9ebcc442e0b07df4d0aace317583e482e4b03ae" exitCode=0
Jan 28 15:05:42 crc kubenswrapper[4893]: I0128 15:05:42.537626 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g675f" event={"ID":"94f2541b-4f69-4bbc-9388-c040e53d85a0","Type":"ContainerDied","Data":"fc19aea9d4aaf64072a93bc9f9ebcc442e0b07df4d0aace317583e482e4b03ae"}
Jan 28 15:05:42 crc kubenswrapper[4893]: I0128 15:05:42.553384 4893 generic.go:334] "Generic (PLEG): container finished" podID="ace4b0ad-d8d3-48aa-8635-6e6e96030672" containerID="4b8937070b2652128903b3264abad9f3199c4f3176fad1653a47d3898492c053" exitCode=0
Jan 28 15:05:42 crc kubenswrapper[4893]: I0128 15:05:42.553449 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46wz5" event={"ID":"ace4b0ad-d8d3-48aa-8635-6e6e96030672","Type":"ContainerDied","Data":"4b8937070b2652128903b3264abad9f3199c4f3176fad1653a47d3898492c053"}
Jan 28 15:05:42 crc kubenswrapper[4893]: I0128 15:05:42.568386 4893 generic.go:334] "Generic (PLEG): container finished" podID="f9efa33f-313e-484f-967c-1d829b6f8250" containerID="72abe041597c6df0b2391a6e053b221bdb6eac1404fcf6a17287f24e73f3a86e" exitCode=0
Jan 28 15:05:42 crc kubenswrapper[4893]: I0128 15:05:42.568453 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77mgk" event={"ID":"f9efa33f-313e-484f-967c-1d829b6f8250","Type":"ContainerDied","Data":"72abe041597c6df0b2391a6e053b221bdb6eac1404fcf6a17287f24e73f3a86e"}
Jan 28 15:05:42 crc kubenswrapper[4893]: I0128 15:05:42.901297 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b19fd77-6353-4456-afd3-00dc264d614e" path="/var/lib/kubelet/pods/8b19fd77-6353-4456-afd3-00dc264d614e/volumes"
Jan 28 15:05:43 crc kubenswrapper[4893]: I0128 15:05:43.578650 4893 generic.go:334] "Generic (PLEG): container finished" podID="0c2ed13a-5aec-42ad-80a0-1ee315e4fb12" containerID="616e13dce44cad0ae55ffbed0a8f8195bb6f01d1875a5a34ea3ae09453c2331b" exitCode=0
Jan 28 15:05:43 crc kubenswrapper[4893]: I0128 15:05:43.578759 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtslh" event={"ID":"0c2ed13a-5aec-42ad-80a0-1ee315e4fb12","Type":"ContainerDied","Data":"616e13dce44cad0ae55ffbed0a8f8195bb6f01d1875a5a34ea3ae09453c2331b"}
Jan 28 15:05:43 crc kubenswrapper[4893]: I0128 15:05:43.593737 4893 generic.go:334] "Generic (PLEG): container finished" podID="f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" containerID="cf99786428fb4340e15a1b7e27396705ce11c36a5509e9d98b7c8b11dc84fe64" exitCode=0
Jan 28 15:05:43 crc kubenswrapper[4893]: I0128 15:05:43.593828 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nwlnm" event={"ID":"f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2","Type":"ContainerDied","Data":"cf99786428fb4340e15a1b7e27396705ce11c36a5509e9d98b7c8b11dc84fe64"}
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.123127 4893 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 28 15:05:44 crc kubenswrapper[4893]: E0128 15:05:44.124009 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b19fd77-6353-4456-afd3-00dc264d614e" containerName="registry-server"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.124050 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b19fd77-6353-4456-afd3-00dc264d614e" containerName="registry-server"
Jan 28 15:05:44 crc kubenswrapper[4893]: E0128 15:05:44.124064 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b19fd77-6353-4456-afd3-00dc264d614e" containerName="extract-content"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.124074 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b19fd77-6353-4456-afd3-00dc264d614e" containerName="extract-content"
Jan 28 15:05:44 crc kubenswrapper[4893]: E0128 15:05:44.124115 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8921c1c-eff5-4b34-a390-ea1bcfacb84c" containerName="pruner"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.124122 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8921c1c-eff5-4b34-a390-ea1bcfacb84c" containerName="pruner"
Jan 28 15:05:44 crc kubenswrapper[4893]: E0128 15:05:44.124134 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b19fd77-6353-4456-afd3-00dc264d614e" containerName="extract-utilities"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.124141 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b19fd77-6353-4456-afd3-00dc264d614e" containerName="extract-utilities"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.124286 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b19fd77-6353-4456-afd3-00dc264d614e" containerName="registry-server"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.124307 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8921c1c-eff5-4b34-a390-ea1bcfacb84c" containerName="pruner"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.124808 4893 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.125004 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.125169 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6" gracePeriod=15
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.125268 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d" gracePeriod=15
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.125287 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://4132d2592d5c12a6e2fe340b9a6bc7fb6104bcd378bb48e44da753c0ed1be380" gracePeriod=15
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.125373 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb" gracePeriod=15
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.125275 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568" gracePeriod=15
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.125490 4893 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 28 15:05:44 crc kubenswrapper[4893]: E0128 15:05:44.126050 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.126077 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 28 15:05:44 crc kubenswrapper[4893]: E0128 15:05:44.126096 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.126104 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 28 15:05:44 crc kubenswrapper[4893]: E0128 15:05:44.126115 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.126124 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 28 15:05:44 crc kubenswrapper[4893]: E0128 15:05:44.126139 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.126148 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 28 15:05:44 crc kubenswrapper[4893]: E0128 15:05:44.126162 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.126170 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 28 15:05:44 crc kubenswrapper[4893]: E0128 15:05:44.126182 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.126189 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 28 15:05:44 crc kubenswrapper[4893]: E0128 15:05:44.126201 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.126208 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 28 15:05:44 crc kubenswrapper[4893]: E0128 15:05:44.126221 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.126228 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.126443 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.126458 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.126490 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.126511 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.126522 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.126533 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.126830 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.172426 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.229265 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.229321 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.229359 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.229423 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.229486 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.229511 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.229531 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.229548 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.331292 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.331345 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.331369 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.331408 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.331426 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.331439 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.331498 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.331527 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.331596 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.331641 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.331660 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.331679 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.331702 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.331722 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.331741 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.331759 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.463460 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.599676 4893 generic.go:334] "Generic (PLEG): container finished" podID="6f68e997-8efc-4d18-bc36-8c55c1c80630" containerID="da912a2892844ddbad44c08c9325c8a961b556edb35b14102018a33635ddc6aa" exitCode=0
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.599755 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6f68e997-8efc-4d18-bc36-8c55c1c80630","Type":"ContainerDied","Data":"da912a2892844ddbad44c08c9325c8a961b556edb35b14102018a33635ddc6aa"}
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.600996 4893 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.601229 4893 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.601431 4893 status_manager.go:851] "Failed to get status for pod" podUID="6f68e997-8efc-4d18-bc36-8c55c1c80630" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.603082 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.604153 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.604758 4893 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4132d2592d5c12a6e2fe340b9a6bc7fb6104bcd378bb48e44da753c0ed1be380" exitCode=0
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.604775 4893 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb" exitCode=0
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.604783 4893 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568" exitCode=0
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.604791 4893 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d" exitCode=2
Jan 28 15:05:44 crc kubenswrapper[4893]: I0128 15:05:44.604820 4893 scope.go:117] "RemoveContainer" containerID="fb4a8d1ea8008664e401da40079224b36f4b7784bd1a80ffebc3ff26993a682c"
Jan 28 15:05:45 crc kubenswrapper[4893]: E0128 15:05:45.332370 4893 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.9:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-s2wp6.188eed6a83780fec openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-s2wp6,UID:c1d61ecd-2c35-4e84-85db-9ebe350850a6,APIVersion:v1,ResourceVersion:28193,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 2.799s (2.799s including waiting). Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 15:05:45.331183596 +0000 UTC m=+263.104798634,LastTimestamp:2026-01-28 15:05:45.331183596 +0000 UTC m=+263.104798634,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 28 15:05:45 crc kubenswrapper[4893]: W0128 15:05:45.355019 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-d96983d79ef2a503c33359bf9c5373cdcd7747c00e58d5ab819b7db1841a6912 WatchSource:0}: Error finding container d96983d79ef2a503c33359bf9c5373cdcd7747c00e58d5ab819b7db1841a6912: Status 404 returned error can't find the container with id d96983d79ef2a503c33359bf9c5373cdcd7747c00e58d5ab819b7db1841a6912
Jan 28 15:05:45 crc kubenswrapper[4893]: I0128 15:05:45.617402 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d96983d79ef2a503c33359bf9c5373cdcd7747c00e58d5ab819b7db1841a6912"}
Jan 28 15:05:45 crc kubenswrapper[4893]: I0128 15:05:45.623328 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 28 15:05:45 crc kubenswrapper[4893]: I0128 15:05:45.913230 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 28 15:05:45 crc kubenswrapper[4893]: I0128 15:05:45.914024 4893 status_manager.go:851] "Failed to get status for pod" podUID="6f68e997-8efc-4d18-bc36-8c55c1c80630" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 28 15:05:45 crc kubenswrapper[4893]: I0128 15:05:45.914602 4893 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 28 15:05:46 crc kubenswrapper[4893]: I0128 15:05:46.059116 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6f68e997-8efc-4d18-bc36-8c55c1c80630-var-lock\") pod \"6f68e997-8efc-4d18-bc36-8c55c1c80630\" (UID: \"6f68e997-8efc-4d18-bc36-8c55c1c80630\") "
Jan 28 15:05:46 crc kubenswrapper[4893]: I0128 15:05:46.059205 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f68e997-8efc-4d18-bc36-8c55c1c80630-kube-api-access\") pod \"6f68e997-8efc-4d18-bc36-8c55c1c80630\" (UID: \"6f68e997-8efc-4d18-bc36-8c55c1c80630\") "
Jan 28 15:05:46 crc kubenswrapper[4893]: I0128 15:05:46.059234 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f68e997-8efc-4d18-bc36-8c55c1c80630-var-lock" (OuterVolumeSpecName: "var-lock") pod "6f68e997-8efc-4d18-bc36-8c55c1c80630" (UID: "6f68e997-8efc-4d18-bc36-8c55c1c80630"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:05:46 crc kubenswrapper[4893]: I0128 15:05:46.059340 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f68e997-8efc-4d18-bc36-8c55c1c80630-kubelet-dir\") pod \"6f68e997-8efc-4d18-bc36-8c55c1c80630\" (UID: \"6f68e997-8efc-4d18-bc36-8c55c1c80630\") "
Jan 28 15:05:46 crc kubenswrapper[4893]: I0128 15:05:46.059482 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f68e997-8efc-4d18-bc36-8c55c1c80630-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6f68e997-8efc-4d18-bc36-8c55c1c80630" (UID: "6f68e997-8efc-4d18-bc36-8c55c1c80630"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:05:46 crc kubenswrapper[4893]: I0128 15:05:46.059675 4893 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6f68e997-8efc-4d18-bc36-8c55c1c80630-var-lock\") on node \"crc\" DevicePath \"\""
Jan 28 15:05:46 crc kubenswrapper[4893]: I0128 15:05:46.059699 4893 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f68e997-8efc-4d18-bc36-8c55c1c80630-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 28 15:05:46 crc kubenswrapper[4893]: I0128 15:05:46.068144 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f68e997-8efc-4d18-bc36-8c55c1c80630-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6f68e997-8efc-4d18-bc36-8c55c1c80630" (UID: "6f68e997-8efc-4d18-bc36-8c55c1c80630"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:05:46 crc kubenswrapper[4893]: I0128 15:05:46.163653 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f68e997-8efc-4d18-bc36-8c55c1c80630-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 28 15:05:46 crc kubenswrapper[4893]: I0128 15:05:46.633717 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"6f68e997-8efc-4d18-bc36-8c55c1c80630","Type":"ContainerDied","Data":"e5c7bacf3782358de6f461e425e6f7a960dae8d6da1032e40360e6e271c1061f"}
Jan 28 15:05:46 crc kubenswrapper[4893]: I0128 15:05:46.634269 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5c7bacf3782358de6f461e425e6f7a960dae8d6da1032e40360e6e271c1061f"
Jan 28 15:05:46 crc kubenswrapper[4893]: I0128 15:05:46.633791 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 28 15:05:46 crc kubenswrapper[4893]: I0128 15:05:46.637643 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"9afe3a9d96d6cdd33e2e392f6e713656b4a1c4c0c2e73a597b75413d41449a5c"}
Jan 28 15:05:46 crc kubenswrapper[4893]: I0128 15:05:46.652187 4893 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 28 15:05:46 crc kubenswrapper[4893]: I0128 15:05:46.652466 4893 status_manager.go:851] "Failed to get status for pod" podUID="6f68e997-8efc-4d18-bc36-8c55c1c80630" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 28 15:05:46 crc kubenswrapper[4893]: E0128 15:05:46.801404 4893 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.9:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-s2wp6.188eed6a83780fec openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-s2wp6,UID:c1d61ecd-2c35-4e84-85db-9ebe350850a6,APIVersion:v1,ResourceVersion:28193,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 2.799s (2.799s including waiting). Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 15:05:45.331183596 +0000 UTC m=+263.104798634,LastTimestamp:2026-01-28 15:05:45.331183596 +0000 UTC m=+263.104798634,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 28 15:05:47 crc kubenswrapper[4893]: I0128 15:05:47.647014 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 28 15:05:47 crc kubenswrapper[4893]: I0128 15:05:47.648427 4893 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6" exitCode=0
Jan 28 15:05:47 crc kubenswrapper[4893]: I0128 15:05:47.665827 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2wp6" event={"ID":"c1d61ecd-2c35-4e84-85db-9ebe350850a6","Type":"ContainerStarted","Data":"359accb6acde26f42c238b20b166e78083674f8c2a6955d24a10d74da3342acf"}
Jan 28 15:05:47 crc kubenswrapper[4893]: I0128 15:05:47.667349 4893 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 28 15:05:47 crc kubenswrapper[4893]: I0128 15:05:47.667740 4893 status_manager.go:851] "Failed to get status for pod" podUID="6f68e997-8efc-4d18-bc36-8c55c1c80630" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 28 15:05:47 crc kubenswrapper[4893]: I0128 15:05:47.668205 4893 status_manager.go:851] "Failed to get status for pod" podUID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" pod="openshift-marketplace/community-operators-s2wp6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s2wp6\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 28 15:05:47 crc kubenswrapper[4893]: I0128 15:05:47.669226 4893 status_manager.go:851] "Failed to get status for pod" podUID="6f68e997-8efc-4d18-bc36-8c55c1c80630" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 28 15:05:47 crc kubenswrapper[4893]: I0128 15:05:47.669762 4893 status_manager.go:851] "Failed to get status for pod" podUID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" pod="openshift-marketplace/community-operators-s2wp6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s2wp6\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 28 15:05:47 crc kubenswrapper[4893]: I0128 15:05:47.670073 4893 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.023326 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.024618 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.025284 4893 status_manager.go:851] "Failed to get status for pod" podUID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" pod="openshift-marketplace/community-operators-s2wp6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s2wp6\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.025463 4893 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.025805 4893 status_manager.go:851] "Failed to get status for pod" podUID="6f68e997-8efc-4d18-bc36-8c55c1c80630" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.026340 4893 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.094943 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.095038 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.095171 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.095440 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.095504 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.095513 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.196924 4893 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.196976 4893 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.196987 4893 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.318073 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s2wp6"
Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.318395 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s2wp6"
Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.675396 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nwlnm" event={"ID":"f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2","Type":"ContainerStarted","Data":"729023193e6b1bda6fc6d60539a8e4559bcc05e677a01e03d3cea81c63c009fb"}
Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.676781 4893 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.677189 4893 status_manager.go:851] "Failed to get status for pod" podUID="6f68e997-8efc-4d18-bc36-8c55c1c80630" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.677832 4893 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 28 15:05:48 crc
kubenswrapper[4893]: I0128 15:05:48.678544 4893 status_manager.go:851] "Failed to get status for pod" podUID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" pod="openshift-marketplace/community-operators-s2wp6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s2wp6\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.679170 4893 status_manager.go:851] "Failed to get status for pod" podUID="f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" pod="openshift-marketplace/redhat-operators-nwlnm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-nwlnm\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.679907 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.682095 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.685702 4893 scope.go:117] "RemoveContainer" containerID="4132d2592d5c12a6e2fe340b9a6bc7fb6104bcd378bb48e44da753c0ed1be380" Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.717571 4893 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.718280 4893 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.718803 4893 status_manager.go:851] "Failed to get status for pod" podUID="6f68e997-8efc-4d18-bc36-8c55c1c80630" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.719197 4893 status_manager.go:851] "Failed to get status for pod" podUID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" pod="openshift-marketplace/community-operators-s2wp6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s2wp6\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.719561 4893 status_manager.go:851] "Failed to get status for pod" podUID="f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" pod="openshift-marketplace/redhat-operators-nwlnm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-nwlnm\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:48 crc kubenswrapper[4893]: I0128 15:05:48.898616 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" 
path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 28 15:05:49 crc kubenswrapper[4893]: E0128 15:05:49.006348 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:05:49Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:05:49Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:05:49Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:05:49Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[],\\\"sizeBytes\\\":1675675872},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:68c28a690c4c3482a63d6de9cf3b80304e983243444eb4d2c5fcaf5c051eb54b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:a273081c72178c20c79eca9b18dbb926d33a6bb826b215c14de6b31207e497ca\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202349806},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:420326d8488ceff2cde22ad8b85d739b0c254d47e703f7ddb1f08f77a48816a6\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:54817da328fa589491a3acbe80acdd88c0830dcc63aaafc08c3539925a1a3b03\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1180692192},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],
\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:49 crc kubenswrapper[4893]: E0128 15:05:49.008006 4893 kubelet_node_status.go:585] "Error 
updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:49 crc kubenswrapper[4893]: E0128 15:05:49.008230 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:49 crc kubenswrapper[4893]: E0128 15:05:49.008390 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:49 crc kubenswrapper[4893]: E0128 15:05:49.008553 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:49 crc kubenswrapper[4893]: E0128 15:05:49.008572 4893 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:05:49 crc kubenswrapper[4893]: I0128 15:05:49.314616 4893 scope.go:117] "RemoveContainer" containerID="f35a78f3d7c5ce16bc2fe995161c59bb622e5bd273c8240fd33121f069da2feb" Jan 28 15:05:49 crc kubenswrapper[4893]: I0128 15:05:49.367210 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-s2wp6" podUID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" containerName="registry-server" probeResult="failure" output=< Jan 28 15:05:49 crc kubenswrapper[4893]: timeout: failed to connect service ":50051" within 1s Jan 28 15:05:49 crc kubenswrapper[4893]: > Jan 28 15:05:49 crc kubenswrapper[4893]: I0128 15:05:49.690233 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 15:05:51 crc kubenswrapper[4893]: I0128 15:05:51.078951 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nwlnm" Jan 28 15:05:51 crc kubenswrapper[4893]: I0128 15:05:51.079954 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nwlnm" Jan 28 15:05:52 crc kubenswrapper[4893]: I0128 15:05:52.138357 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nwlnm" podUID="f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" containerName="registry-server" probeResult="failure" output=< Jan 28 15:05:52 crc kubenswrapper[4893]: timeout: failed to connect service ":50051" within 1s Jan 28 15:05:52 crc kubenswrapper[4893]: > Jan 28 15:05:52 crc kubenswrapper[4893]: I0128 15:05:52.894591 4893 status_manager.go:851] "Failed to get status for pod" podUID="f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" pod="openshift-marketplace/redhat-operators-nwlnm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-nwlnm\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:52 crc kubenswrapper[4893]: I0128 15:05:52.895421 4893 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:52 crc kubenswrapper[4893]: I0128 15:05:52.895705 4893 status_manager.go:851] "Failed to get status for pod" podUID="6f68e997-8efc-4d18-bc36-8c55c1c80630" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:52 crc kubenswrapper[4893]: I0128 15:05:52.896048 4893 status_manager.go:851] "Failed to get status for pod" podUID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" pod="openshift-marketplace/community-operators-s2wp6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s2wp6\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:53 crc kubenswrapper[4893]: E0128 15:05:53.241870 4893 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:53 crc kubenswrapper[4893]: E0128 15:05:53.242715 4893 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:53 crc kubenswrapper[4893]: E0128 15:05:53.243100 4893 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:53 crc kubenswrapper[4893]: E0128 15:05:53.243597 4893 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:53 crc kubenswrapper[4893]: E0128 15:05:53.244178 4893 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:53 crc kubenswrapper[4893]: I0128 15:05:53.244205 4893 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 28 15:05:53 crc kubenswrapper[4893]: E0128 15:05:53.244459 4893 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="200ms" Jan 28 15:05:53 crc kubenswrapper[4893]: E0128 15:05:53.445540 4893 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="400ms" Jan 28 15:05:53 crc kubenswrapper[4893]: E0128 15:05:53.847089 4893 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 
38.102.83.9:6443: connect: connection refused" interval="800ms" Jan 28 15:05:54 crc kubenswrapper[4893]: E0128 15:05:54.648914 4893 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="1.6s" Jan 28 15:05:56 crc kubenswrapper[4893]: E0128 15:05:56.251345 4893 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="3.2s" Jan 28 15:05:56 crc kubenswrapper[4893]: E0128 15:05:56.802778 4893 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.9:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-s2wp6.188eed6a83780fec openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-s2wp6,UID:c1d61ecd-2c35-4e84-85db-9ebe350850a6,APIVersion:v1,ResourceVersion:28193,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 2.799s (2.799s including waiting). Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 15:05:45.331183596 +0000 UTC m=+263.104798634,LastTimestamp:2026-01-28 15:05:45.331183596 +0000 UTC m=+263.104798634,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 15:05:57 crc kubenswrapper[4893]: I0128 15:05:57.704100 4893 scope.go:117] "RemoveContainer" containerID="c9826f61f4823af8d3ec4b10ffe82acae60c358c9d1da819af2c4acb7aa09568" Jan 28 15:05:57 crc kubenswrapper[4893]: I0128 15:05:57.750430 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 15:05:57 crc kubenswrapper[4893]: I0128 15:05:57.891771 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:05:57 crc kubenswrapper[4893]: I0128 15:05:57.893759 4893 status_manager.go:851] "Failed to get status for pod" podUID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" pod="openshift-marketplace/community-operators-s2wp6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s2wp6\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:57 crc kubenswrapper[4893]: I0128 15:05:57.894510 4893 status_manager.go:851] "Failed to get status for pod" podUID="f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" pod="openshift-marketplace/redhat-operators-nwlnm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-nwlnm\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:57 crc kubenswrapper[4893]: I0128 15:05:57.895452 4893 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:57 crc kubenswrapper[4893]: I0128 15:05:57.895728 4893 status_manager.go:851] "Failed to get status for pod" podUID="6f68e997-8efc-4d18-bc36-8c55c1c80630" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:57 crc kubenswrapper[4893]: I0128 15:05:57.916178 4893 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="fce95a28-d92e-420e-b16d-f90868902d76" Jan 28 15:05:57 crc kubenswrapper[4893]: I0128 15:05:57.916235 4893 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="fce95a28-d92e-420e-b16d-f90868902d76" Jan 28 15:05:57 crc kubenswrapper[4893]: E0128 15:05:57.916979 4893 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:05:57 crc kubenswrapper[4893]: I0128 15:05:57.917980 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:05:58 crc kubenswrapper[4893]: I0128 15:05:58.368132 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s2wp6" Jan 28 15:05:58 crc kubenswrapper[4893]: I0128 15:05:58.368730 4893 status_manager.go:851] "Failed to get status for pod" podUID="6f68e997-8efc-4d18-bc36-8c55c1c80630" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:58 crc kubenswrapper[4893]: I0128 15:05:58.368893 4893 status_manager.go:851] "Failed to get status for pod" podUID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" pod="openshift-marketplace/community-operators-s2wp6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s2wp6\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:58 crc kubenswrapper[4893]: I0128 15:05:58.369084 4893 status_manager.go:851] "Failed to get status for pod" podUID="f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" pod="openshift-marketplace/redhat-operators-nwlnm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-nwlnm\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:58 crc kubenswrapper[4893]: I0128 15:05:58.369280 4893 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:58 crc kubenswrapper[4893]: I0128 15:05:58.404442 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s2wp6" Jan 28 15:05:58 crc kubenswrapper[4893]: I0128 15:05:58.405054 4893 status_manager.go:851] "Failed to get status for pod" podUID="6f68e997-8efc-4d18-bc36-8c55c1c80630" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:58 crc kubenswrapper[4893]: I0128 15:05:58.405390 4893 status_manager.go:851] "Failed to get status for pod" podUID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" pod="openshift-marketplace/community-operators-s2wp6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s2wp6\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:58 crc kubenswrapper[4893]: I0128 15:05:58.405615 4893 status_manager.go:851] "Failed to get status for pod" podUID="f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" pod="openshift-marketplace/redhat-operators-nwlnm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-nwlnm\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:58 crc kubenswrapper[4893]: I0128 15:05:58.405814 4893 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:59 crc kubenswrapper[4893]: E0128 15:05:59.189689 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:05:59Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:05:59Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:05:59Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:05:59Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[],\\\"sizeBytes\\\":1675675872},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:68c28a690c4c3482a63d6de9cf3b80304e983243444eb4d2c5fcaf5c051eb54b\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:a273081c72178c20c79eca9b18dbb926d33a6bb826b215c14de6b31207e497ca\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202349806},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:420326d8488ceff2cde22ad8b85d739b0c254d47e703f7ddb1f08f77a48816a6\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:54817da328fa589491a3acbe80acdd88c0830dcc63aaafc08c3539925a1a3b03\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1180692192},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c
\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection 
refused" Jan 28 15:05:59 crc kubenswrapper[4893]: E0128 15:05:59.190179 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:59 crc kubenswrapper[4893]: E0128 15:05:59.190586 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:59 crc kubenswrapper[4893]: E0128 15:05:59.190899 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:59 crc kubenswrapper[4893]: E0128 15:05:59.191157 4893 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:59 crc kubenswrapper[4893]: E0128 15:05:59.191178 4893 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:05:59 crc kubenswrapper[4893]: E0128 15:05:59.452303 4893 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="6.4s" Jan 28 15:05:59 crc kubenswrapper[4893]: I0128 15:05:59.514944 4893 scope.go:117] "RemoveContainer" containerID="af7b7bf0d596c8642a7b256f63c1b9dce907919e7d7cb2c146b7de8263b3e75d" Jan 28 15:05:59 crc kubenswrapper[4893]: I0128 15:05:59.550516 4893 scope.go:117] "RemoveContainer" containerID="b1bdf3847b3f51b490841f82c8f37767e4585e38c17f2b243e3386b2b88bb5f6" Jan 28 15:05:59 crc kubenswrapper[4893]: I0128 15:05:59.569160 4893 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 28 15:05:59 crc kubenswrapper[4893]: I0128 15:05:59.569820 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 28 15:05:59 crc kubenswrapper[4893]: I0128 15:05:59.639789 4893 scope.go:117] "RemoveContainer" containerID="ff1816df2ea22ecabf6084c5d4f364d88dd60858230629f3f8e430fcb0de473d" Jan 28 15:05:59 crc kubenswrapper[4893]: I0128 15:05:59.770413 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"be232323578c27b7afec7882180748ddedb4c43bc90526c83f5f86baaa7d96f8"} Jan 28 15:05:59 crc kubenswrapper[4893]: I0128 15:05:59.772667 4893 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 28 15:05:59 crc kubenswrapper[4893]: I0128 15:05:59.772704 4893 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b" exitCode=1 Jan 28 15:05:59 crc kubenswrapper[4893]: I0128 15:05:59.772747 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b"} Jan 28 15:05:59 crc kubenswrapper[4893]: I0128 15:05:59.773217 4893 scope.go:117] "RemoveContainer" containerID="8ed4edd4cdc7e214f6d9f8ef6faee21248279641d5895752b3aa4b581b5bdb7b" Jan 28 15:05:59 crc kubenswrapper[4893]: I0128 15:05:59.774189 4893 status_manager.go:851] "Failed to get status for pod" podUID="6f68e997-8efc-4d18-bc36-8c55c1c80630" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:59 crc kubenswrapper[4893]: I0128 15:05:59.774393 4893 status_manager.go:851] "Failed to get status for pod" podUID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" pod="openshift-marketplace/community-operators-s2wp6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s2wp6\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:59 crc kubenswrapper[4893]: I0128 15:05:59.774615 4893 status_manager.go:851] "Failed to get status for pod" podUID="f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" pod="openshift-marketplace/redhat-operators-nwlnm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-nwlnm\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:59 crc kubenswrapper[4893]: I0128 15:05:59.774855 4893 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:05:59 crc kubenswrapper[4893]: I0128 15:05:59.775048 4893 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:00 crc kubenswrapper[4893]: I0128 15:06:00.784888 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtslh" event={"ID":"0c2ed13a-5aec-42ad-80a0-1ee315e4fb12","Type":"ContainerStarted","Data":"716d2f54a411be1789b1a32a0d2d9c3de0cdb66d92d591702ce45b778af55a6c"} Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.120779 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nwlnm" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.121519 4893 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.121852 4893 status_manager.go:851] "Failed to get status for pod" podUID="6f68e997-8efc-4d18-bc36-8c55c1c80630" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.122206 4893 status_manager.go:851] "Failed to get status for pod" podUID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" pod="openshift-marketplace/community-operators-s2wp6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s2wp6\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.122521 4893 status_manager.go:851] "Failed to get status for pod" podUID="f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" pod="openshift-marketplace/redhat-operators-nwlnm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-nwlnm\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.122794 4893 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.157663 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nwlnm" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.158393 4893 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.159065 4893 status_manager.go:851] "Failed to get status for pod" podUID="6f68e997-8efc-4d18-bc36-8c55c1c80630" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.159702 4893 status_manager.go:851] "Failed to get status for pod" podUID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" pod="openshift-marketplace/community-operators-s2wp6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s2wp6\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.159995 4893 status_manager.go:851] "Failed to get status for pod" podUID="f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" pod="openshift-marketplace/redhat-operators-nwlnm" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-nwlnm\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.160309 4893 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.797048 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g675f" event={"ID":"94f2541b-4f69-4bbc-9388-c040e53d85a0","Type":"ContainerStarted","Data":"78467fdbd2a193568e7e45add517b915c8ed8b5de5fd0590125fadb1d857c5a9"} Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.798282 4893 status_manager.go:851] "Failed to get status for pod" podUID="6f68e997-8efc-4d18-bc36-8c55c1c80630" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.798712 4893 status_manager.go:851] "Failed to get status for pod" podUID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" pod="openshift-marketplace/community-operators-s2wp6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s2wp6\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.799434 4893 status_manager.go:851] "Failed to get status for pod" podUID="f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" pod="openshift-marketplace/redhat-operators-nwlnm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-nwlnm\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.799658 4893 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.799819 4893 status_manager.go:851] "Failed to get status for pod" podUID="94f2541b-4f69-4bbc-9388-c040e53d85a0" pod="openshift-marketplace/redhat-marketplace-g675f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-g675f\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.799978 4893 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.800559 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46wz5" 
event={"ID":"ace4b0ad-d8d3-48aa-8635-6e6e96030672","Type":"ContainerStarted","Data":"c8903d241b056aba7483e427b14f484ac5b6792c53ac188643254d9fb98566a8"} Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.801933 4893 status_manager.go:851] "Failed to get status for pod" podUID="6f68e997-8efc-4d18-bc36-8c55c1c80630" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.802321 4893 status_manager.go:851] "Failed to get status for pod" podUID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" pod="openshift-marketplace/community-operators-s2wp6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s2wp6\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.802569 4893 status_manager.go:851] "Failed to get status for pod" podUID="f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" pod="openshift-marketplace/redhat-operators-nwlnm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-nwlnm\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.803096 4893 status_manager.go:851] "Failed to get status for pod" podUID="ace4b0ad-d8d3-48aa-8635-6e6e96030672" pod="openshift-marketplace/certified-operators-46wz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-46wz5\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.803450 4893 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.803571 4893 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="5d4cc94e6c7ef5fc69ade20d38ae2acf37331183e5f293fee3faa20b38aa788a" exitCode=0 Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.803647 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"5d4cc94e6c7ef5fc69ade20d38ae2acf37331183e5f293fee3faa20b38aa788a"} Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.803964 4893 status_manager.go:851] "Failed to get status for pod" podUID="94f2541b-4f69-4bbc-9388-c040e53d85a0" pod="openshift-marketplace/redhat-marketplace-g675f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-g675f\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.804098 4893 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="fce95a28-d92e-420e-b16d-f90868902d76" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.804210 4893 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="fce95a28-d92e-420e-b16d-f90868902d76" Jan 28 15:06:01 crc kubenswrapper[4893]: 
I0128 15:06:01.804801 4893 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: E0128 15:06:01.804884 4893 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.805259 4893 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.805700 4893 status_manager.go:851] "Failed to get status for pod" podUID="6f68e997-8efc-4d18-bc36-8c55c1c80630" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.806140 4893 status_manager.go:851] "Failed to get status for pod" podUID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" pod="openshift-marketplace/community-operators-s2wp6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s2wp6\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.806510 4893 status_manager.go:851] "Failed to get status for pod" podUID="f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" pod="openshift-marketplace/redhat-operators-nwlnm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-nwlnm\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.807071 4893 status_manager.go:851] "Failed to get status for pod" podUID="ace4b0ad-d8d3-48aa-8635-6e6e96030672" pod="openshift-marketplace/certified-operators-46wz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-46wz5\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.808543 4893 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.808919 4893 status_manager.go:851] "Failed to get status for pod" podUID="94f2541b-4f69-4bbc-9388-c040e53d85a0" pod="openshift-marketplace/redhat-marketplace-g675f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-g675f\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: 
I0128 15:06:01.809189 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.810607 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"816a1ed9f5e161bb0bb9dbde56b80b842ad29b61890e7cf7a905408187b92dcd"} Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.814875 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77mgk" event={"ID":"f9efa33f-313e-484f-967c-1d829b6f8250","Type":"ContainerStarted","Data":"afafc5badef04b51797b6a729a96eb155267dd750e94cd4abf3ecbd5f14568ac"} Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.821895 4893 status_manager.go:851] "Failed to get status for pod" podUID="6f68e997-8efc-4d18-bc36-8c55c1c80630" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.822600 4893 status_manager.go:851] "Failed to get status for pod" podUID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" pod="openshift-marketplace/community-operators-s2wp6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s2wp6\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.823316 4893 status_manager.go:851] "Failed to get status for pod" podUID="f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" pod="openshift-marketplace/redhat-operators-nwlnm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-nwlnm\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.823813 4893 status_manager.go:851] "Failed to get status for pod" podUID="ace4b0ad-d8d3-48aa-8635-6e6e96030672" pod="openshift-marketplace/certified-operators-46wz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-46wz5\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.823979 4893 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.824136 4893 status_manager.go:851] "Failed to get status for pod" podUID="94f2541b-4f69-4bbc-9388-c040e53d85a0" pod="openshift-marketplace/redhat-marketplace-g675f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-g675f\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.824327 4893 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.824668 4893 status_manager.go:851] "Failed to get status for pod" podUID="0c2ed13a-5aec-42ad-80a0-1ee315e4fb12" pod="openshift-marketplace/redhat-operators-mtslh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mtslh\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.824836 4893 status_manager.go:851] "Failed to get status for pod" podUID="6f68e997-8efc-4d18-bc36-8c55c1c80630" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.825242 4893 status_manager.go:851] "Failed to get status for pod" podUID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" pod="openshift-marketplace/community-operators-s2wp6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s2wp6\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.825602 4893 status_manager.go:851] "Failed to get status for pod" podUID="f9efa33f-313e-484f-967c-1d829b6f8250" pod="openshift-marketplace/certified-operators-77mgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-77mgk\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.826543 4893 status_manager.go:851] "Failed to get status for pod" podUID="f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" pod="openshift-marketplace/redhat-operators-nwlnm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-nwlnm\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.826873 4893 status_manager.go:851] "Failed to get status for pod" podUID="ace4b0ad-d8d3-48aa-8635-6e6e96030672" pod="openshift-marketplace/certified-operators-46wz5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-46wz5\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.827044 4893 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.827263 4893 status_manager.go:851] "Failed to get status for pod" podUID="94f2541b-4f69-4bbc-9388-c040e53d85a0" pod="openshift-marketplace/redhat-marketplace-g675f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-g675f\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:01 crc kubenswrapper[4893]: I0128 15:06:01.827417 4893 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 28 15:06:02 crc kubenswrapper[4893]: I0128 15:06:02.827775 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"307db576dc40cd01fe335e19d023fc065745eba8c891a0d5ff84f73c7ddf4901"} Jan 28 15:06:02 crc kubenswrapper[4893]: I0128 15:06:02.828507 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4abc9bfddc96292a4df58e056e433201291f699b772af043080416feff322a25"} Jan 28 15:06:02 crc kubenswrapper[4893]: I0128 15:06:02.828520 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7505a22513d5b3c8027e33f0635a4e0fc39a2127d015727d48106cf64f198af1"} Jan 28 15:06:02 crc kubenswrapper[4893]: I0128 15:06:02.955674 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:06:02 crc kubenswrapper[4893]: I0128 15:06:02.961215 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:06:03 crc kubenswrapper[4893]: I0128 15:06:03.838370 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a4dda733cb891c702a1b4c10587f18069956939afd752e523b6f4aca7201df12"} Jan 28 15:06:03 crc kubenswrapper[4893]: I0128 15:06:03.838838 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ae85db6bd0de10cb714ccfe86b786eed85bc43adf5f2bd157997510ff15024f0"} Jan 28 15:06:03 crc kubenswrapper[4893]: I0128 15:06:03.838873 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:06:03 crc kubenswrapper[4893]: I0128 15:06:03.838889 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:06:03 crc kubenswrapper[4893]: I0128 15:06:03.838667 4893 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="fce95a28-d92e-420e-b16d-f90868902d76" Jan 28 15:06:03 crc kubenswrapper[4893]: I0128 15:06:03.838916 4893 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="fce95a28-d92e-420e-b16d-f90868902d76" Jan 28 15:06:07 crc kubenswrapper[4893]: I0128 15:06:07.918812 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:06:07 crc kubenswrapper[4893]: I0128 15:06:07.919300 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:06:07 crc kubenswrapper[4893]: I0128 15:06:07.939702 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:06:08 crc kubenswrapper[4893]: I0128 15:06:08.099742 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-46wz5" Jan 28 15:06:08 crc kubenswrapper[4893]: I0128 15:06:08.099836 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-46wz5" Jan 28 15:06:08 crc kubenswrapper[4893]: I0128 15:06:08.146619 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-46wz5" Jan 28 15:06:08 crc kubenswrapper[4893]: I0128 15:06:08.814785 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-77mgk" Jan 28 15:06:08 crc kubenswrapper[4893]: I0128 15:06:08.815043 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-77mgk" Jan 28 15:06:08 crc kubenswrapper[4893]: I0128 15:06:08.860030 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-77mgk" Jan 28 15:06:08 crc kubenswrapper[4893]: I0128 15:06:08.906594 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-77mgk" Jan 28 15:06:08 crc kubenswrapper[4893]: I0128 15:06:08.909292 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-46wz5" Jan 28 15:06:09 crc kubenswrapper[4893]: I0128 15:06:09.085308 4893 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:06:09 crc kubenswrapper[4893]: I0128 15:06:09.173582 4893 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="19ec9990-f841-47d0-bbf3-9c7f0787137d" Jan 28 15:06:09 crc kubenswrapper[4893]: I0128 15:06:09.874831 4893 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="fce95a28-d92e-420e-b16d-f90868902d76" Jan 28 15:06:09 crc kubenswrapper[4893]: I0128 15:06:09.874871 4893 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="fce95a28-d92e-420e-b16d-f90868902d76" Jan 28 15:06:09 crc kubenswrapper[4893]: I0128 15:06:09.879648 4893 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="19ec9990-f841-47d0-bbf3-9c7f0787137d" Jan 28 15:06:10 crc kubenswrapper[4893]: I0128 15:06:10.064659 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g675f" Jan 28 15:06:10 crc kubenswrapper[4893]: I0128 15:06:10.065294 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g675f" Jan 28 15:06:10 crc kubenswrapper[4893]: I0128 15:06:10.114058 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g675f" Jan 28 15:06:10 crc kubenswrapper[4893]: I0128 15:06:10.927163 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g675f" Jan 28 15:06:11 crc kubenswrapper[4893]: I0128 
15:06:11.492030 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mtslh" Jan 28 15:06:11 crc kubenswrapper[4893]: I0128 15:06:11.493267 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mtslh" Jan 28 15:06:11 crc kubenswrapper[4893]: I0128 15:06:11.535677 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mtslh" Jan 28 15:06:11 crc kubenswrapper[4893]: I0128 15:06:11.956199 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mtslh" Jan 28 15:06:19 crc kubenswrapper[4893]: I0128 15:06:19.509887 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 28 15:06:19 crc kubenswrapper[4893]: I0128 15:06:19.511723 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 28 15:06:19 crc kubenswrapper[4893]: I0128 15:06:19.518877 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 28 15:06:19 crc kubenswrapper[4893]: I0128 15:06:19.523373 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 28 15:06:19 crc kubenswrapper[4893]: I0128 15:06:19.572784 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:06:19 crc kubenswrapper[4893]: I0128 15:06:19.770870 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 28 15:06:20 crc kubenswrapper[4893]: I0128 15:06:20.047017 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 28 15:06:20 crc kubenswrapper[4893]: I0128 15:06:20.446487 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 28 15:06:20 crc kubenswrapper[4893]: I0128 15:06:20.506963 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 28 15:06:20 crc kubenswrapper[4893]: I0128 15:06:20.554647 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 28 15:06:20 crc kubenswrapper[4893]: I0128 15:06:20.570658 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 28 15:06:20 crc kubenswrapper[4893]: I0128 15:06:20.598924 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 28 15:06:20 crc kubenswrapper[4893]: I0128 15:06:20.676769 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 28 15:06:20 crc kubenswrapper[4893]: I0128 15:06:20.784858 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 28 15:06:20 crc kubenswrapper[4893]: I0128 15:06:20.831690 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 28 
15:06:20 crc kubenswrapper[4893]: I0128 15:06:20.969611 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 28 15:06:21 crc kubenswrapper[4893]: I0128 15:06:21.025639 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 28 15:06:21 crc kubenswrapper[4893]: I0128 15:06:21.036198 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 28 15:06:21 crc kubenswrapper[4893]: I0128 15:06:21.052976 4893 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 28 15:06:21 crc kubenswrapper[4893]: I0128 15:06:21.161159 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 28 15:06:21 crc kubenswrapper[4893]: I0128 15:06:21.593637 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 28 15:06:21 crc kubenswrapper[4893]: I0128 15:06:21.700644 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 28 15:06:21 crc kubenswrapper[4893]: I0128 15:06:21.894126 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 28 15:06:22 crc kubenswrapper[4893]: I0128 15:06:22.400104 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 28 15:06:22 crc kubenswrapper[4893]: I0128 15:06:22.406553 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 28 15:06:22 crc kubenswrapper[4893]: I0128 15:06:22.472689 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 28 15:06:22 crc kubenswrapper[4893]: I0128 15:06:22.541520 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 28 15:06:22 crc kubenswrapper[4893]: I0128 15:06:22.666562 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 28 15:06:22 crc kubenswrapper[4893]: I0128 15:06:22.695261 4893 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 28 15:06:22 crc kubenswrapper[4893]: I0128 15:06:22.710573 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 28 15:06:22 crc kubenswrapper[4893]: I0128 15:06:22.798438 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 28 15:06:22 crc kubenswrapper[4893]: I0128 15:06:22.878773 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 28 15:06:22 crc kubenswrapper[4893]: I0128 15:06:22.933638 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 28 15:06:22 crc kubenswrapper[4893]: I0128 15:06:22.952614 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 28 15:06:23 crc kubenswrapper[4893]: I0128 15:06:23.024179 4893 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca"/"signing-cabundle" Jan 28 15:06:23 crc kubenswrapper[4893]: I0128 15:06:23.025102 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 15:06:23 crc kubenswrapper[4893]: I0128 15:06:23.138915 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 28 15:06:23 crc kubenswrapper[4893]: I0128 15:06:23.186416 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 15:06:23 crc kubenswrapper[4893]: I0128 15:06:23.190730 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 28 15:06:23 crc kubenswrapper[4893]: I0128 15:06:23.214745 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 15:06:23 crc kubenswrapper[4893]: I0128 15:06:23.222707 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 28 15:06:23 crc kubenswrapper[4893]: I0128 15:06:23.230010 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 28 15:06:23 crc kubenswrapper[4893]: I0128 15:06:23.303681 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 28 15:06:23 crc kubenswrapper[4893]: I0128 15:06:23.311843 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 28 15:06:23 crc kubenswrapper[4893]: I0128 15:06:23.515428 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 28 15:06:23 crc kubenswrapper[4893]: I0128 15:06:23.581151 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 28 15:06:23 crc kubenswrapper[4893]: I0128 15:06:23.737391 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 15:06:23 crc kubenswrapper[4893]: I0128 15:06:23.761831 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 28 15:06:23 crc kubenswrapper[4893]: I0128 15:06:23.896114 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 28 15:06:23 crc kubenswrapper[4893]: I0128 15:06:23.917895 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 28 15:06:24 crc kubenswrapper[4893]: I0128 15:06:24.142640 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 28 15:06:24 crc kubenswrapper[4893]: I0128 15:06:24.280707 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 28 15:06:24 crc kubenswrapper[4893]: I0128 15:06:24.421840 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 28 
15:06:24 crc kubenswrapper[4893]: I0128 15:06:24.426860 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 28 15:06:24 crc kubenswrapper[4893]: I0128 15:06:24.537507 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 28 15:06:24 crc kubenswrapper[4893]: I0128 15:06:24.601563 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 28 15:06:24 crc kubenswrapper[4893]: I0128 15:06:24.681339 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 28 15:06:24 crc kubenswrapper[4893]: I0128 15:06:24.726961 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 28 15:06:24 crc kubenswrapper[4893]: I0128 15:06:24.758457 4893 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 28 15:06:24 crc kubenswrapper[4893]: I0128 15:06:24.760938 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=40.760910411 podStartE2EDuration="40.760910411s" podCreationTimestamp="2026-01-28 15:05:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:06:09.247717541 +0000 UTC m=+287.021332579" watchObservedRunningTime="2026-01-28 15:06:24.760910411 +0000 UTC m=+302.534525439" Jan 28 15:06:24 crc kubenswrapper[4893]: I0128 15:06:24.762020 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-46wz5" podStartSLOduration=29.781934977 podStartE2EDuration="2m37.762010083s" podCreationTimestamp="2026-01-28 15:03:47 +0000 UTC" firstStartedPulling="2026-01-28 15:03:51.549293259 +0000 UTC m=+149.322908287" lastFinishedPulling="2026-01-28 15:05:59.529368365 +0000 UTC m=+277.302983393" observedRunningTime="2026-01-28 15:06:09.166832935 +0000 UTC m=+286.940447963" watchObservedRunningTime="2026-01-28 15:06:24.762010083 +0000 UTC m=+302.535625111" Jan 28 15:06:24 crc kubenswrapper[4893]: I0128 15:06:24.762359 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s2wp6" podStartSLOduration=45.05113746 podStartE2EDuration="2m37.762350413s" podCreationTimestamp="2026-01-28 15:03:47 +0000 UTC" firstStartedPulling="2026-01-28 15:03:52.619941892 +0000 UTC m=+150.393556920" lastFinishedPulling="2026-01-28 15:05:45.331154835 +0000 UTC m=+263.104769873" observedRunningTime="2026-01-28 15:06:09.301292342 +0000 UTC m=+287.074907380" watchObservedRunningTime="2026-01-28 15:06:24.762350413 +0000 UTC m=+302.535965441" Jan 28 15:06:24 crc kubenswrapper[4893]: I0128 15:06:24.763141 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mtslh" podStartSLOduration=28.177349785 podStartE2EDuration="2m33.763133695s" podCreationTimestamp="2026-01-28 15:03:51 +0000 UTC" firstStartedPulling="2026-01-28 15:03:53.651601666 +0000 UTC m=+151.425216694" lastFinishedPulling="2026-01-28 15:05:59.237385576 +0000 UTC m=+277.011000604" observedRunningTime="2026-01-28 15:06:09.264674813 +0000 UTC m=+287.038289861" watchObservedRunningTime="2026-01-28 
15:06:24.763133695 +0000 UTC m=+302.536748723" Jan 28 15:06:24 crc kubenswrapper[4893]: I0128 15:06:24.764297 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-77mgk" podStartSLOduration=31.679989885 podStartE2EDuration="2m36.764290197s" podCreationTimestamp="2026-01-28 15:03:48 +0000 UTC" firstStartedPulling="2026-01-28 15:03:52.619965102 +0000 UTC m=+150.393580130" lastFinishedPulling="2026-01-28 15:05:57.704265414 +0000 UTC m=+275.477880442" observedRunningTime="2026-01-28 15:06:09.320220179 +0000 UTC m=+287.093835217" watchObservedRunningTime="2026-01-28 15:06:24.764290197 +0000 UTC m=+302.537905225" Jan 28 15:06:24 crc kubenswrapper[4893]: I0128 15:06:24.764666 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nwlnm" podStartSLOduration=41.053972782 podStartE2EDuration="2m34.764659428s" podCreationTimestamp="2026-01-28 15:03:50 +0000 UTC" firstStartedPulling="2026-01-28 15:03:53.648656226 +0000 UTC m=+151.422271254" lastFinishedPulling="2026-01-28 15:05:47.359342872 +0000 UTC m=+265.132957900" observedRunningTime="2026-01-28 15:06:09.136554695 +0000 UTC m=+286.910169713" watchObservedRunningTime="2026-01-28 15:06:24.764659428 +0000 UTC m=+302.538274466" Jan 28 15:06:24 crc kubenswrapper[4893]: I0128 15:06:24.766779 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g675f" podStartSLOduration=39.830919542 podStartE2EDuration="2m35.766767817s" podCreationTimestamp="2026-01-28 15:03:49 +0000 UTC" firstStartedPulling="2026-01-28 15:03:52.60016054 +0000 UTC m=+150.373775558" lastFinishedPulling="2026-01-28 15:05:48.536008805 +0000 UTC m=+266.309623833" observedRunningTime="2026-01-28 15:06:09.236020108 +0000 UTC m=+287.009635146" watchObservedRunningTime="2026-01-28 15:06:24.766767817 +0000 UTC m=+302.540382845" Jan 28 15:06:24 crc kubenswrapper[4893]: I0128 15:06:24.768443 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 15:06:24 crc kubenswrapper[4893]: I0128 15:06:24.768672 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 15:06:24 crc kubenswrapper[4893]: I0128 15:06:24.769115 4893 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="fce95a28-d92e-420e-b16d-f90868902d76" Jan 28 15:06:24 crc kubenswrapper[4893]: I0128 15:06:24.769232 4893 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="fce95a28-d92e-420e-b16d-f90868902d76" Jan 28 15:06:24 crc kubenswrapper[4893]: I0128 15:06:24.775361 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:06:24 crc kubenswrapper[4893]: I0128 15:06:24.777843 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:06:24 crc kubenswrapper[4893]: I0128 15:06:24.790537 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.790516622 podStartE2EDuration="15.790516622s" podCreationTimestamp="2026-01-28 15:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:06:24.787758884 +0000 UTC 
m=+302.561373922" watchObservedRunningTime="2026-01-28 15:06:24.790516622 +0000 UTC m=+302.564131650" Jan 28 15:06:24 crc kubenswrapper[4893]: I0128 15:06:24.873901 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 28 15:06:24 crc kubenswrapper[4893]: I0128 15:06:24.889897 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 28 15:06:24 crc kubenswrapper[4893]: I0128 15:06:24.954673 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 28 15:06:25 crc kubenswrapper[4893]: I0128 15:06:25.038735 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 28 15:06:25 crc kubenswrapper[4893]: I0128 15:06:25.107421 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 28 15:06:25 crc kubenswrapper[4893]: I0128 15:06:25.195259 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 28 15:06:25 crc kubenswrapper[4893]: I0128 15:06:25.237330 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 28 15:06:25 crc kubenswrapper[4893]: I0128 15:06:25.377384 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 28 15:06:25 crc kubenswrapper[4893]: I0128 15:06:25.414716 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 28 15:06:25 crc kubenswrapper[4893]: I0128 15:06:25.435061 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 28 15:06:25 crc kubenswrapper[4893]: I0128 15:06:25.464893 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 28 15:06:25 crc kubenswrapper[4893]: I0128 15:06:25.607706 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 28 15:06:25 crc kubenswrapper[4893]: I0128 15:06:25.641325 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 28 15:06:25 crc kubenswrapper[4893]: I0128 15:06:25.707754 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 28 15:06:25 crc kubenswrapper[4893]: I0128 15:06:25.886758 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 28 15:06:25 crc kubenswrapper[4893]: I0128 15:06:25.934334 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 28 15:06:25 crc kubenswrapper[4893]: I0128 15:06:25.939327 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 28 15:06:26 crc kubenswrapper[4893]: I0128 15:06:26.109248 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 28 15:06:26 crc kubenswrapper[4893]: I0128 15:06:26.112145 4893 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 15:06:26 crc kubenswrapper[4893]: I0128 15:06:26.155090 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 15:06:26 crc kubenswrapper[4893]: I0128 15:06:26.197545 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 28 15:06:26 crc kubenswrapper[4893]: I0128 15:06:26.316305 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 28 15:06:26 crc kubenswrapper[4893]: I0128 15:06:26.353563 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 28 15:06:26 crc kubenswrapper[4893]: I0128 15:06:26.358661 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 28 15:06:26 crc kubenswrapper[4893]: I0128 15:06:26.378050 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 28 15:06:26 crc kubenswrapper[4893]: I0128 15:06:26.437409 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 28 15:06:26 crc kubenswrapper[4893]: I0128 15:06:26.503603 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 28 15:06:26 crc kubenswrapper[4893]: I0128 15:06:26.539223 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 28 15:06:26 crc kubenswrapper[4893]: I0128 15:06:26.631796 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 28 15:06:26 crc kubenswrapper[4893]: I0128 15:06:26.729872 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 28 15:06:26 crc kubenswrapper[4893]: I0128 15:06:26.924786 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 28 15:06:26 crc kubenswrapper[4893]: I0128 15:06:26.988273 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 28 15:06:26 crc kubenswrapper[4893]: I0128 15:06:26.991633 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 28 15:06:27 crc kubenswrapper[4893]: I0128 15:06:27.071254 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 28 15:06:27 crc kubenswrapper[4893]: I0128 15:06:27.125519 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 28 15:06:27 crc kubenswrapper[4893]: I0128 15:06:27.239463 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 28 15:06:27 crc kubenswrapper[4893]: I0128 15:06:27.348026 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 28 15:06:27 crc kubenswrapper[4893]: I0128 15:06:27.393098 4893 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"cni-copy-resources" Jan 28 15:06:27 crc kubenswrapper[4893]: I0128 15:06:27.393363 4893 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 28 15:06:27 crc kubenswrapper[4893]: I0128 15:06:27.540181 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 28 15:06:27 crc kubenswrapper[4893]: I0128 15:06:27.576573 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 15:06:27 crc kubenswrapper[4893]: I0128 15:06:27.587311 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 28 15:06:27 crc kubenswrapper[4893]: I0128 15:06:27.623294 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 28 15:06:27 crc kubenswrapper[4893]: I0128 15:06:27.645829 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 28 15:06:27 crc kubenswrapper[4893]: I0128 15:06:27.656067 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 28 15:06:27 crc kubenswrapper[4893]: I0128 15:06:27.702245 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 28 15:06:27 crc kubenswrapper[4893]: I0128 15:06:27.842554 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 28 15:06:27 crc kubenswrapper[4893]: I0128 15:06:27.962979 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 28 15:06:27 crc kubenswrapper[4893]: I0128 15:06:27.983148 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 28 15:06:28 crc kubenswrapper[4893]: I0128 15:06:28.144492 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 28 15:06:28 crc kubenswrapper[4893]: I0128 15:06:28.406626 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 28 15:06:28 crc kubenswrapper[4893]: I0128 15:06:28.440351 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 28 15:06:28 crc kubenswrapper[4893]: I0128 15:06:28.618103 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 28 15:06:28 crc kubenswrapper[4893]: I0128 15:06:28.656730 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 28 15:06:28 crc kubenswrapper[4893]: I0128 15:06:28.701506 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 28 15:06:28 crc kubenswrapper[4893]: I0128 15:06:28.762184 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 28 15:06:28 crc kubenswrapper[4893]: I0128 15:06:28.956529 4893 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"env-overrides" Jan 28 15:06:29 crc kubenswrapper[4893]: I0128 15:06:29.020207 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 15:06:29 crc kubenswrapper[4893]: I0128 15:06:29.052410 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 28 15:06:29 crc kubenswrapper[4893]: I0128 15:06:29.059645 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 28 15:06:29 crc kubenswrapper[4893]: I0128 15:06:29.119056 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 28 15:06:29 crc kubenswrapper[4893]: I0128 15:06:29.235164 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 28 15:06:29 crc kubenswrapper[4893]: I0128 15:06:29.297681 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 15:06:29 crc kubenswrapper[4893]: I0128 15:06:29.338376 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 28 15:06:29 crc kubenswrapper[4893]: I0128 15:06:29.388956 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 28 15:06:29 crc kubenswrapper[4893]: I0128 15:06:29.523876 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 28 15:06:29 crc kubenswrapper[4893]: I0128 15:06:29.595659 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 28 15:06:29 crc kubenswrapper[4893]: I0128 15:06:29.633329 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 28 15:06:29 crc kubenswrapper[4893]: I0128 15:06:29.663807 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 28 15:06:29 crc kubenswrapper[4893]: I0128 15:06:29.691683 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 28 15:06:29 crc kubenswrapper[4893]: I0128 15:06:29.748825 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 28 15:06:29 crc kubenswrapper[4893]: I0128 15:06:29.844240 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 28 15:06:29 crc kubenswrapper[4893]: I0128 15:06:29.851262 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 15:06:29 crc kubenswrapper[4893]: I0128 15:06:29.863427 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 28 15:06:29 crc kubenswrapper[4893]: I0128 15:06:29.863997 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 28 15:06:30 crc kubenswrapper[4893]: I0128 15:06:30.013356 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 15:06:30 crc 
kubenswrapper[4893]: I0128 15:06:30.035533 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 28 15:06:30 crc kubenswrapper[4893]: I0128 15:06:30.086309 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 28 15:06:30 crc kubenswrapper[4893]: I0128 15:06:30.263289 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 28 15:06:30 crc kubenswrapper[4893]: I0128 15:06:30.334294 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 28 15:06:30 crc kubenswrapper[4893]: I0128 15:06:30.468431 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 28 15:06:30 crc kubenswrapper[4893]: I0128 15:06:30.511869 4893 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 15:06:30 crc kubenswrapper[4893]: I0128 15:06:30.512195 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://9afe3a9d96d6cdd33e2e392f6e713656b4a1c4c0c2e73a597b75413d41449a5c" gracePeriod=5 Jan 28 15:06:30 crc kubenswrapper[4893]: I0128 15:06:30.518988 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 28 15:06:30 crc kubenswrapper[4893]: I0128 15:06:30.557101 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 28 15:06:30 crc kubenswrapper[4893]: I0128 15:06:30.648240 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 28 15:06:30 crc kubenswrapper[4893]: I0128 15:06:30.695254 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 28 15:06:30 crc kubenswrapper[4893]: I0128 15:06:30.731156 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 28 15:06:30 crc kubenswrapper[4893]: I0128 15:06:30.828888 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 28 15:06:30 crc kubenswrapper[4893]: I0128 15:06:30.899540 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 28 15:06:31 crc kubenswrapper[4893]: I0128 15:06:31.095842 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 28 15:06:31 crc kubenswrapper[4893]: I0128 15:06:31.125360 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 28 15:06:31 crc kubenswrapper[4893]: I0128 15:06:31.155243 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 28 15:06:31 crc kubenswrapper[4893]: I0128 15:06:31.155984 4893 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 28 15:06:31 crc kubenswrapper[4893]: I0128 15:06:31.189273 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 28 15:06:31 crc kubenswrapper[4893]: I0128 15:06:31.204507 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 15:06:31 crc kubenswrapper[4893]: I0128 15:06:31.228885 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 28 15:06:31 crc kubenswrapper[4893]: I0128 15:06:31.342877 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 28 15:06:31 crc kubenswrapper[4893]: I0128 15:06:31.451863 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 28 15:06:31 crc kubenswrapper[4893]: I0128 15:06:31.492848 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 28 15:06:31 crc kubenswrapper[4893]: I0128 15:06:31.509585 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 28 15:06:31 crc kubenswrapper[4893]: I0128 15:06:31.509791 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 28 15:06:31 crc kubenswrapper[4893]: I0128 15:06:31.558308 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 28 15:06:31 crc kubenswrapper[4893]: I0128 15:06:31.578967 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 28 15:06:31 crc kubenswrapper[4893]: I0128 15:06:31.582932 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 28 15:06:31 crc kubenswrapper[4893]: I0128 15:06:31.636576 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 28 15:06:31 crc kubenswrapper[4893]: I0128 15:06:31.639734 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 28 15:06:31 crc kubenswrapper[4893]: I0128 15:06:31.756714 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 28 15:06:31 crc kubenswrapper[4893]: I0128 15:06:31.780304 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 28 15:06:31 crc kubenswrapper[4893]: I0128 15:06:31.951575 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 28 15:06:31 crc kubenswrapper[4893]: I0128 15:06:31.958444 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 28 15:06:32 crc kubenswrapper[4893]: I0128 15:06:32.078917 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 15:06:32 crc kubenswrapper[4893]: I0128 15:06:32.143285 4893 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 28 15:06:32 crc kubenswrapper[4893]: I0128 15:06:32.267622 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 28 15:06:32 crc kubenswrapper[4893]: I0128 15:06:32.361720 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 28 15:06:32 crc kubenswrapper[4893]: I0128 15:06:32.375732 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 28 15:06:32 crc kubenswrapper[4893]: I0128 15:06:32.463933 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 28 15:06:32 crc kubenswrapper[4893]: I0128 15:06:32.469252 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 15:06:32 crc kubenswrapper[4893]: I0128 15:06:32.635353 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 28 15:06:32 crc kubenswrapper[4893]: I0128 15:06:32.862954 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 28 15:06:32 crc kubenswrapper[4893]: I0128 15:06:32.877822 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 28 15:06:32 crc kubenswrapper[4893]: I0128 15:06:32.914080 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 28 15:06:32 crc kubenswrapper[4893]: I0128 15:06:32.971769 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 28 15:06:33 crc kubenswrapper[4893]: I0128 15:06:33.028699 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 28 15:06:33 crc kubenswrapper[4893]: I0128 15:06:33.134413 4893 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 28 15:06:33 crc kubenswrapper[4893]: I0128 15:06:33.183931 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 28 15:06:33 crc kubenswrapper[4893]: I0128 15:06:33.424512 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 28 15:06:33 crc kubenswrapper[4893]: I0128 15:06:33.439315 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 15:06:33 crc kubenswrapper[4893]: I0128 15:06:33.497469 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 28 15:06:33 crc kubenswrapper[4893]: I0128 15:06:33.513878 4893 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 28 15:06:33 crc kubenswrapper[4893]: I0128 15:06:33.520665 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 28 15:06:33 crc kubenswrapper[4893]: I0128 15:06:33.574112 4893 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"kube-root-ca.crt" Jan 28 15:06:33 crc kubenswrapper[4893]: I0128 15:06:33.687071 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 28 15:06:33 crc kubenswrapper[4893]: I0128 15:06:33.712595 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 28 15:06:33 crc kubenswrapper[4893]: I0128 15:06:33.814735 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 28 15:06:33 crc kubenswrapper[4893]: I0128 15:06:33.829205 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 28 15:06:33 crc kubenswrapper[4893]: I0128 15:06:33.885955 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 28 15:06:33 crc kubenswrapper[4893]: I0128 15:06:33.936112 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 28 15:06:34 crc kubenswrapper[4893]: I0128 15:06:34.003165 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 28 15:06:34 crc kubenswrapper[4893]: I0128 15:06:34.042571 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 28 15:06:34 crc kubenswrapper[4893]: I0128 15:06:34.055604 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 28 15:06:34 crc kubenswrapper[4893]: I0128 15:06:34.149262 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 15:06:34 crc kubenswrapper[4893]: I0128 15:06:34.480436 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 28 15:06:34 crc kubenswrapper[4893]: I0128 15:06:34.484334 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 28 15:06:34 crc kubenswrapper[4893]: I0128 15:06:34.622376 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 28 15:06:34 crc kubenswrapper[4893]: I0128 15:06:34.626711 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 28 15:06:34 crc kubenswrapper[4893]: I0128 15:06:34.741716 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 28 15:06:35 crc kubenswrapper[4893]: I0128 15:06:35.131000 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 28 15:06:36 crc kubenswrapper[4893]: I0128 15:06:36.831582 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 28 15:06:36 crc kubenswrapper[4893]: I0128 15:06:36.831926 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:06:36 crc kubenswrapper[4893]: I0128 15:06:36.898703 4893 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 28 15:06:36 crc kubenswrapper[4893]: I0128 15:06:36.920561 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 15:06:36 crc kubenswrapper[4893]: I0128 15:06:36.920612 4893 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="5210f9f8-6f3d-4719-b1a7-d435d3798eef" Jan 28 15:06:36 crc kubenswrapper[4893]: I0128 15:06:36.925740 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 15:06:36 crc kubenswrapper[4893]: I0128 15:06:36.925792 4893 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="5210f9f8-6f3d-4719-b1a7-d435d3798eef" Jan 28 15:06:37 crc kubenswrapper[4893]: I0128 15:06:37.021335 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 28 15:06:37 crc kubenswrapper[4893]: I0128 15:06:37.021385 4893 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="9afe3a9d96d6cdd33e2e392f6e713656b4a1c4c0c2e73a597b75413d41449a5c" exitCode=137 Jan 28 15:06:37 crc kubenswrapper[4893]: I0128 15:06:37.021434 4893 scope.go:117] "RemoveContainer" containerID="9afe3a9d96d6cdd33e2e392f6e713656b4a1c4c0c2e73a597b75413d41449a5c" Jan 28 15:06:37 crc kubenswrapper[4893]: I0128 15:06:37.021581 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:06:37 crc kubenswrapper[4893]: I0128 15:06:37.031009 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 15:06:37 crc kubenswrapper[4893]: I0128 15:06:37.031066 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 15:06:37 crc kubenswrapper[4893]: I0128 15:06:37.031187 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 15:06:37 crc kubenswrapper[4893]: I0128 15:06:37.031235 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 15:06:37 crc kubenswrapper[4893]: I0128 15:06:37.031260 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 15:06:37 crc kubenswrapper[4893]: I0128 15:06:37.031208 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:06:37 crc kubenswrapper[4893]: I0128 15:06:37.031422 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:06:37 crc kubenswrapper[4893]: I0128 15:06:37.031443 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:06:37 crc kubenswrapper[4893]: I0128 15:06:37.031455 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:06:37 crc kubenswrapper[4893]: I0128 15:06:37.031678 4893 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 28 15:06:37 crc kubenswrapper[4893]: I0128 15:06:37.031699 4893 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 28 15:06:37 crc kubenswrapper[4893]: I0128 15:06:37.031711 4893 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 28 15:06:37 crc kubenswrapper[4893]: I0128 15:06:37.031723 4893 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:06:37 crc kubenswrapper[4893]: I0128 15:06:37.039049 4893 scope.go:117] "RemoveContainer" containerID="9afe3a9d96d6cdd33e2e392f6e713656b4a1c4c0c2e73a597b75413d41449a5c" Jan 28 15:06:37 crc kubenswrapper[4893]: E0128 15:06:37.040182 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9afe3a9d96d6cdd33e2e392f6e713656b4a1c4c0c2e73a597b75413d41449a5c\": container with ID starting with 9afe3a9d96d6cdd33e2e392f6e713656b4a1c4c0c2e73a597b75413d41449a5c not found: ID does not exist" containerID="9afe3a9d96d6cdd33e2e392f6e713656b4a1c4c0c2e73a597b75413d41449a5c" Jan 28 15:06:37 crc kubenswrapper[4893]: I0128 15:06:37.040215 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9afe3a9d96d6cdd33e2e392f6e713656b4a1c4c0c2e73a597b75413d41449a5c"} err="failed to get container status \"9afe3a9d96d6cdd33e2e392f6e713656b4a1c4c0c2e73a597b75413d41449a5c\": rpc error: code = NotFound desc = could not find container \"9afe3a9d96d6cdd33e2e392f6e713656b4a1c4c0c2e73a597b75413d41449a5c\": container with ID starting with 9afe3a9d96d6cdd33e2e392f6e713656b4a1c4c0c2e73a597b75413d41449a5c not found: ID does not exist" Jan 28 15:06:37 crc kubenswrapper[4893]: I0128 15:06:37.040595 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:06:37 crc kubenswrapper[4893]: I0128 15:06:37.132570 4893 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:06:38 crc kubenswrapper[4893]: I0128 15:06:38.898869 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 28 15:06:43 crc kubenswrapper[4893]: I0128 15:06:43.456925 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 28 15:06:44 crc kubenswrapper[4893]: I0128 15:06:44.889183 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 28 15:06:46 crc kubenswrapper[4893]: I0128 15:06:46.608417 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7fb7959c56-xkdmp"] Jan 28 15:06:46 crc kubenswrapper[4893]: I0128 15:06:46.609921 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7fb7959c56-xkdmp" podUID="33f7e358-34be-4503-bdd1-1235b134b9cb" containerName="controller-manager" containerID="cri-o://992428a903d8e6d9da9842fa1d382201aa0fa09acb3141eb85c3f6a109cf300c" gracePeriod=30 Jan 28 15:06:46 crc kubenswrapper[4893]: I0128 15:06:46.616224 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv"] Jan 28 15:06:46 crc kubenswrapper[4893]: I0128 15:06:46.616646 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv" podUID="f7404534-e869-46d4-a493-e8971172b7b0" containerName="route-controller-manager" containerID="cri-o://fccf0514eba05cf1f4f7bd3091014112f70458b8798d34ef25f85a18bd98a245" gracePeriod=30 Jan 28 15:06:46 crc kubenswrapper[4893]: I0128 15:06:46.717226 4893 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 28 15:06:47 crc kubenswrapper[4893]: I0128 15:06:47.565033 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv" Jan 28 15:06:47 crc kubenswrapper[4893]: I0128 15:06:47.571208 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7fb7959c56-xkdmp" Jan 28 15:06:47 crc kubenswrapper[4893]: I0128 15:06:47.682299 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7404534-e869-46d4-a493-e8971172b7b0-config\") pod \"f7404534-e869-46d4-a493-e8971172b7b0\" (UID: \"f7404534-e869-46d4-a493-e8971172b7b0\") " Jan 28 15:06:47 crc kubenswrapper[4893]: I0128 15:06:47.682348 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/33f7e358-34be-4503-bdd1-1235b134b9cb-proxy-ca-bundles\") pod \"33f7e358-34be-4503-bdd1-1235b134b9cb\" (UID: \"33f7e358-34be-4503-bdd1-1235b134b9cb\") " Jan 28 15:06:47 crc kubenswrapper[4893]: I0128 15:06:47.682383 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7404534-e869-46d4-a493-e8971172b7b0-serving-cert\") pod \"f7404534-e869-46d4-a493-e8971172b7b0\" (UID: \"f7404534-e869-46d4-a493-e8971172b7b0\") " Jan 28 15:06:47 crc kubenswrapper[4893]: I0128 15:06:47.682409 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7404534-e869-46d4-a493-e8971172b7b0-client-ca\") pod \"f7404534-e869-46d4-a493-e8971172b7b0\" (UID: \"f7404534-e869-46d4-a493-e8971172b7b0\") " Jan 28 15:06:47 crc kubenswrapper[4893]: I0128 15:06:47.682464 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33f7e358-34be-4503-bdd1-1235b134b9cb-config\") pod \"33f7e358-34be-4503-bdd1-1235b134b9cb\" (UID: \"33f7e358-34be-4503-bdd1-1235b134b9cb\") " Jan 28 15:06:47 crc kubenswrapper[4893]: I0128 15:06:47.682497 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxp26\" (UniqueName: \"kubernetes.io/projected/f7404534-e869-46d4-a493-e8971172b7b0-kube-api-access-xxp26\") pod \"f7404534-e869-46d4-a493-e8971172b7b0\" (UID: \"f7404534-e869-46d4-a493-e8971172b7b0\") " Jan 28 15:06:47 crc kubenswrapper[4893]: I0128 15:06:47.682531 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxbrq\" (UniqueName: \"kubernetes.io/projected/33f7e358-34be-4503-bdd1-1235b134b9cb-kube-api-access-lxbrq\") pod \"33f7e358-34be-4503-bdd1-1235b134b9cb\" (UID: \"33f7e358-34be-4503-bdd1-1235b134b9cb\") " Jan 28 15:06:47 crc kubenswrapper[4893]: I0128 15:06:47.682554 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33f7e358-34be-4503-bdd1-1235b134b9cb-client-ca\") pod \"33f7e358-34be-4503-bdd1-1235b134b9cb\" (UID: \"33f7e358-34be-4503-bdd1-1235b134b9cb\") " Jan 28 15:06:47 crc kubenswrapper[4893]: I0128 15:06:47.682575 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33f7e358-34be-4503-bdd1-1235b134b9cb-serving-cert\") pod \"33f7e358-34be-4503-bdd1-1235b134b9cb\" (UID: \"33f7e358-34be-4503-bdd1-1235b134b9cb\") " Jan 28 15:06:47 crc kubenswrapper[4893]: I0128 15:06:47.685659 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33f7e358-34be-4503-bdd1-1235b134b9cb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "33f7e358-34be-4503-bdd1-1235b134b9cb" 
(UID: "33f7e358-34be-4503-bdd1-1235b134b9cb"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:06:47 crc kubenswrapper[4893]: I0128 15:06:47.686124 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7404534-e869-46d4-a493-e8971172b7b0-client-ca" (OuterVolumeSpecName: "client-ca") pod "f7404534-e869-46d4-a493-e8971172b7b0" (UID: "f7404534-e869-46d4-a493-e8971172b7b0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:06:47 crc kubenswrapper[4893]: I0128 15:06:47.686297 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7404534-e869-46d4-a493-e8971172b7b0-config" (OuterVolumeSpecName: "config") pod "f7404534-e869-46d4-a493-e8971172b7b0" (UID: "f7404534-e869-46d4-a493-e8971172b7b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:06:47 crc kubenswrapper[4893]: I0128 15:06:47.687535 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33f7e358-34be-4503-bdd1-1235b134b9cb-client-ca" (OuterVolumeSpecName: "client-ca") pod "33f7e358-34be-4503-bdd1-1235b134b9cb" (UID: "33f7e358-34be-4503-bdd1-1235b134b9cb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:06:47 crc kubenswrapper[4893]: I0128 15:06:47.694896 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33f7e358-34be-4503-bdd1-1235b134b9cb-config" (OuterVolumeSpecName: "config") pod "33f7e358-34be-4503-bdd1-1235b134b9cb" (UID: "33f7e358-34be-4503-bdd1-1235b134b9cb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:06:47 crc kubenswrapper[4893]: I0128 15:06:47.696412 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f7e358-34be-4503-bdd1-1235b134b9cb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "33f7e358-34be-4503-bdd1-1235b134b9cb" (UID: "33f7e358-34be-4503-bdd1-1235b134b9cb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:06:47 crc kubenswrapper[4893]: I0128 15:06:47.696533 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7404534-e869-46d4-a493-e8971172b7b0-kube-api-access-xxp26" (OuterVolumeSpecName: "kube-api-access-xxp26") pod "f7404534-e869-46d4-a493-e8971172b7b0" (UID: "f7404534-e869-46d4-a493-e8971172b7b0"). InnerVolumeSpecName "kube-api-access-xxp26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:06:47 crc kubenswrapper[4893]: I0128 15:06:47.696549 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7404534-e869-46d4-a493-e8971172b7b0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f7404534-e869-46d4-a493-e8971172b7b0" (UID: "f7404534-e869-46d4-a493-e8971172b7b0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:06:47 crc kubenswrapper[4893]: I0128 15:06:47.696891 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33f7e358-34be-4503-bdd1-1235b134b9cb-kube-api-access-lxbrq" (OuterVolumeSpecName: "kube-api-access-lxbrq") pod "33f7e358-34be-4503-bdd1-1235b134b9cb" (UID: "33f7e358-34be-4503-bdd1-1235b134b9cb"). 
InnerVolumeSpecName "kube-api-access-lxbrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:06:47 crc kubenswrapper[4893]: I0128 15:06:47.783916 4893 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7404534-e869-46d4-a493-e8971172b7b0-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:06:47 crc kubenswrapper[4893]: I0128 15:06:47.783981 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33f7e358-34be-4503-bdd1-1235b134b9cb-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:06:47 crc kubenswrapper[4893]: I0128 15:06:47.783992 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxp26\" (UniqueName: \"kubernetes.io/projected/f7404534-e869-46d4-a493-e8971172b7b0-kube-api-access-xxp26\") on node \"crc\" DevicePath \"\"" Jan 28 15:06:47 crc kubenswrapper[4893]: I0128 15:06:47.784006 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxbrq\" (UniqueName: \"kubernetes.io/projected/33f7e358-34be-4503-bdd1-1235b134b9cb-kube-api-access-lxbrq\") on node \"crc\" DevicePath \"\"" Jan 28 15:06:47 crc kubenswrapper[4893]: I0128 15:06:47.784015 4893 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33f7e358-34be-4503-bdd1-1235b134b9cb-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:06:47 crc kubenswrapper[4893]: I0128 15:06:47.784023 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33f7e358-34be-4503-bdd1-1235b134b9cb-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:06:47 crc kubenswrapper[4893]: I0128 15:06:47.784032 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7404534-e869-46d4-a493-e8971172b7b0-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:06:47 crc kubenswrapper[4893]: I0128 15:06:47.784060 4893 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/33f7e358-34be-4503-bdd1-1235b134b9cb-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 15:06:47 crc kubenswrapper[4893]: I0128 15:06:47.784069 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7404534-e869-46d4-a493-e8971172b7b0-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.099202 4893 generic.go:334] "Generic (PLEG): container finished" podID="9a587792-e86e-434f-873e-c7ce3aac8bce" containerID="5f7dbf0ce267fc9b6893df92fe6adfff76f434e191e61748f30e887f981629b4" exitCode=0 Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.099359 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fzfvj" event={"ID":"9a587792-e86e-434f-873e-c7ce3aac8bce","Type":"ContainerDied","Data":"5f7dbf0ce267fc9b6893df92fe6adfff76f434e191e61748f30e887f981629b4"} Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.100303 4893 scope.go:117] "RemoveContainer" containerID="5f7dbf0ce267fc9b6893df92fe6adfff76f434e191e61748f30e887f981629b4" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.105242 4893 generic.go:334] "Generic (PLEG): container finished" podID="33f7e358-34be-4503-bdd1-1235b134b9cb" containerID="992428a903d8e6d9da9842fa1d382201aa0fa09acb3141eb85c3f6a109cf300c" exitCode=0 Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 
15:06:48.105439 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fb7959c56-xkdmp" event={"ID":"33f7e358-34be-4503-bdd1-1235b134b9cb","Type":"ContainerDied","Data":"992428a903d8e6d9da9842fa1d382201aa0fa09acb3141eb85c3f6a109cf300c"} Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.105515 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fb7959c56-xkdmp" event={"ID":"33f7e358-34be-4503-bdd1-1235b134b9cb","Type":"ContainerDied","Data":"e2d338548ec36f704a41a9c9887e3a600bf16a2724de009d2431f86e419193c4"} Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.105540 4893 scope.go:117] "RemoveContainer" containerID="992428a903d8e6d9da9842fa1d382201aa0fa09acb3141eb85c3f6a109cf300c" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.105564 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fb7959c56-xkdmp" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.114311 4893 generic.go:334] "Generic (PLEG): container finished" podID="f7404534-e869-46d4-a493-e8971172b7b0" containerID="fccf0514eba05cf1f4f7bd3091014112f70458b8798d34ef25f85a18bd98a245" exitCode=0 Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.114375 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv" event={"ID":"f7404534-e869-46d4-a493-e8971172b7b0","Type":"ContainerDied","Data":"fccf0514eba05cf1f4f7bd3091014112f70458b8798d34ef25f85a18bd98a245"} Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.114413 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv" event={"ID":"f7404534-e869-46d4-a493-e8971172b7b0","Type":"ContainerDied","Data":"e1d505b5f93be5d7e22a2db0b1774a015368b5eb5d67103ddc60df3953602fe9"} Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.114528 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.163168 4893 scope.go:117] "RemoveContainer" containerID="992428a903d8e6d9da9842fa1d382201aa0fa09acb3141eb85c3f6a109cf300c" Jan 28 15:06:48 crc kubenswrapper[4893]: E0128 15:06:48.164659 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"992428a903d8e6d9da9842fa1d382201aa0fa09acb3141eb85c3f6a109cf300c\": container with ID starting with 992428a903d8e6d9da9842fa1d382201aa0fa09acb3141eb85c3f6a109cf300c not found: ID does not exist" containerID="992428a903d8e6d9da9842fa1d382201aa0fa09acb3141eb85c3f6a109cf300c" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.165013 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"992428a903d8e6d9da9842fa1d382201aa0fa09acb3141eb85c3f6a109cf300c"} err="failed to get container status \"992428a903d8e6d9da9842fa1d382201aa0fa09acb3141eb85c3f6a109cf300c\": rpc error: code = NotFound desc = could not find container \"992428a903d8e6d9da9842fa1d382201aa0fa09acb3141eb85c3f6a109cf300c\": container with ID starting with 992428a903d8e6d9da9842fa1d382201aa0fa09acb3141eb85c3f6a109cf300c not found: ID does not exist" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.165046 4893 scope.go:117] "RemoveContainer" containerID="fccf0514eba05cf1f4f7bd3091014112f70458b8798d34ef25f85a18bd98a245" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.182706 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7fb7959c56-xkdmp"] Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.188565 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7fb7959c56-xkdmp"] Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.209994 4893 scope.go:117] "RemoveContainer" containerID="fccf0514eba05cf1f4f7bd3091014112f70458b8798d34ef25f85a18bd98a245" Jan 28 15:06:48 crc kubenswrapper[4893]: E0128 15:06:48.210826 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fccf0514eba05cf1f4f7bd3091014112f70458b8798d34ef25f85a18bd98a245\": container with ID starting with fccf0514eba05cf1f4f7bd3091014112f70458b8798d34ef25f85a18bd98a245 not found: ID does not exist" containerID="fccf0514eba05cf1f4f7bd3091014112f70458b8798d34ef25f85a18bd98a245" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.210895 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fccf0514eba05cf1f4f7bd3091014112f70458b8798d34ef25f85a18bd98a245"} err="failed to get container status \"fccf0514eba05cf1f4f7bd3091014112f70458b8798d34ef25f85a18bd98a245\": rpc error: code = NotFound desc = could not find container \"fccf0514eba05cf1f4f7bd3091014112f70458b8798d34ef25f85a18bd98a245\": container with ID starting with fccf0514eba05cf1f4f7bd3091014112f70458b8798d34ef25f85a18bd98a245 not found: ID does not exist" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.229907 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv"] Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.237583 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d679b55d4-dhpkv"] Jan 28 
15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.404109 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-fzfvj" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.404186 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fzfvj" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.490815 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c57999846-vhncb"] Jan 28 15:06:48 crc kubenswrapper[4893]: E0128 15:06:48.491274 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7404534-e869-46d4-a493-e8971172b7b0" containerName="route-controller-manager" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.491294 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7404534-e869-46d4-a493-e8971172b7b0" containerName="route-controller-manager" Jan 28 15:06:48 crc kubenswrapper[4893]: E0128 15:06:48.491317 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.491326 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 15:06:48 crc kubenswrapper[4893]: E0128 15:06:48.491344 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f68e997-8efc-4d18-bc36-8c55c1c80630" containerName="installer" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.491353 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f68e997-8efc-4d18-bc36-8c55c1c80630" containerName="installer" Jan 28 15:06:48 crc kubenswrapper[4893]: E0128 15:06:48.491373 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33f7e358-34be-4503-bdd1-1235b134b9cb" containerName="controller-manager" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.491383 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="33f7e358-34be-4503-bdd1-1235b134b9cb" containerName="controller-manager" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.491583 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="33f7e358-34be-4503-bdd1-1235b134b9cb" containerName="controller-manager" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.491602 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f68e997-8efc-4d18-bc36-8c55c1c80630" containerName="installer" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.491621 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7404534-e869-46d4-a493-e8971172b7b0" containerName="route-controller-manager" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.491631 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.492324 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c57999846-vhncb" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.494106 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl"] Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.495001 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.497642 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdb88\" (UniqueName: \"kubernetes.io/projected/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-kube-api-access-qdb88\") pod \"controller-manager-c57999846-vhncb\" (UID: \"2ce4b44b-16c6-43bf-8bb3-6d40f53edc17\") " pod="openshift-controller-manager/controller-manager-c57999846-vhncb" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.497764 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82017a0e-90b1-445c-8987-eb750e188245-client-ca\") pod \"route-controller-manager-669976547d-wnhxl\" (UID: \"82017a0e-90b1-445c-8987-eb750e188245\") " pod="openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.497804 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82017a0e-90b1-445c-8987-eb750e188245-serving-cert\") pod \"route-controller-manager-669976547d-wnhxl\" (UID: \"82017a0e-90b1-445c-8987-eb750e188245\") " pod="openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.497920 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-proxy-ca-bundles\") pod \"controller-manager-c57999846-vhncb\" (UID: \"2ce4b44b-16c6-43bf-8bb3-6d40f53edc17\") " pod="openshift-controller-manager/controller-manager-c57999846-vhncb" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.498001 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82017a0e-90b1-445c-8987-eb750e188245-config\") pod \"route-controller-manager-669976547d-wnhxl\" (UID: \"82017a0e-90b1-445c-8987-eb750e188245\") " pod="openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.498049 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-serving-cert\") pod \"controller-manager-c57999846-vhncb\" (UID: \"2ce4b44b-16c6-43bf-8bb3-6d40f53edc17\") " pod="openshift-controller-manager/controller-manager-c57999846-vhncb" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.498157 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb9mv\" (UniqueName: \"kubernetes.io/projected/82017a0e-90b1-445c-8987-eb750e188245-kube-api-access-gb9mv\") pod \"route-controller-manager-669976547d-wnhxl\" (UID: \"82017a0e-90b1-445c-8987-eb750e188245\") " pod="openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.498312 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-client-ca\") pod 
\"controller-manager-c57999846-vhncb\" (UID: \"2ce4b44b-16c6-43bf-8bb3-6d40f53edc17\") " pod="openshift-controller-manager/controller-manager-c57999846-vhncb" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.498375 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-config\") pod \"controller-manager-c57999846-vhncb\" (UID: \"2ce4b44b-16c6-43bf-8bb3-6d40f53edc17\") " pod="openshift-controller-manager/controller-manager-c57999846-vhncb" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.498426 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.498553 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.498684 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.499217 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.499237 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.499306 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.499560 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.499608 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.499880 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.499921 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.502989 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.505755 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.510845 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.512759 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl"] Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.518238 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c57999846-vhncb"] Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.552059 4893 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.599923 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82017a0e-90b1-445c-8987-eb750e188245-client-ca\") pod \"route-controller-manager-669976547d-wnhxl\" (UID: \"82017a0e-90b1-445c-8987-eb750e188245\") " pod="openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.599990 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82017a0e-90b1-445c-8987-eb750e188245-serving-cert\") pod \"route-controller-manager-669976547d-wnhxl\" (UID: \"82017a0e-90b1-445c-8987-eb750e188245\") " pod="openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.600065 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-proxy-ca-bundles\") pod \"controller-manager-c57999846-vhncb\" (UID: \"2ce4b44b-16c6-43bf-8bb3-6d40f53edc17\") " pod="openshift-controller-manager/controller-manager-c57999846-vhncb" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.600097 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82017a0e-90b1-445c-8987-eb750e188245-config\") pod \"route-controller-manager-669976547d-wnhxl\" (UID: \"82017a0e-90b1-445c-8987-eb750e188245\") " pod="openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.600122 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-serving-cert\") pod \"controller-manager-c57999846-vhncb\" (UID: \"2ce4b44b-16c6-43bf-8bb3-6d40f53edc17\") " pod="openshift-controller-manager/controller-manager-c57999846-vhncb" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.600176 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb9mv\" (UniqueName: \"kubernetes.io/projected/82017a0e-90b1-445c-8987-eb750e188245-kube-api-access-gb9mv\") pod \"route-controller-manager-669976547d-wnhxl\" (UID: \"82017a0e-90b1-445c-8987-eb750e188245\") " pod="openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.600272 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-client-ca\") pod \"controller-manager-c57999846-vhncb\" (UID: \"2ce4b44b-16c6-43bf-8bb3-6d40f53edc17\") " pod="openshift-controller-manager/controller-manager-c57999846-vhncb" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.600305 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-config\") pod \"controller-manager-c57999846-vhncb\" (UID: \"2ce4b44b-16c6-43bf-8bb3-6d40f53edc17\") " pod="openshift-controller-manager/controller-manager-c57999846-vhncb" Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.600356 4893 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdb88\" (UniqueName: \"kubernetes.io/projected/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-kube-api-access-qdb88\") pod \"controller-manager-c57999846-vhncb\" (UID: \"2ce4b44b-16c6-43bf-8bb3-6d40f53edc17\") " pod="openshift-controller-manager/controller-manager-c57999846-vhncb"
Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.601171 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82017a0e-90b1-445c-8987-eb750e188245-client-ca\") pod \"route-controller-manager-669976547d-wnhxl\" (UID: \"82017a0e-90b1-445c-8987-eb750e188245\") " pod="openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl"
Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.601590 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-client-ca\") pod \"controller-manager-c57999846-vhncb\" (UID: \"2ce4b44b-16c6-43bf-8bb3-6d40f53edc17\") " pod="openshift-controller-manager/controller-manager-c57999846-vhncb"
Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.602405 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-proxy-ca-bundles\") pod \"controller-manager-c57999846-vhncb\" (UID: \"2ce4b44b-16c6-43bf-8bb3-6d40f53edc17\") " pod="openshift-controller-manager/controller-manager-c57999846-vhncb"
Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.603242 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82017a0e-90b1-445c-8987-eb750e188245-config\") pod \"route-controller-manager-669976547d-wnhxl\" (UID: \"82017a0e-90b1-445c-8987-eb750e188245\") " pod="openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl"
Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.603248 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-config\") pod \"controller-manager-c57999846-vhncb\" (UID: \"2ce4b44b-16c6-43bf-8bb3-6d40f53edc17\") " pod="openshift-controller-manager/controller-manager-c57999846-vhncb"
Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.607687 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82017a0e-90b1-445c-8987-eb750e188245-serving-cert\") pod \"route-controller-manager-669976547d-wnhxl\" (UID: \"82017a0e-90b1-445c-8987-eb750e188245\") " pod="openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl"
Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.607781 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-serving-cert\") pod \"controller-manager-c57999846-vhncb\" (UID: \"2ce4b44b-16c6-43bf-8bb3-6d40f53edc17\") " pod="openshift-controller-manager/controller-manager-c57999846-vhncb"
Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.629529 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdb88\" (UniqueName: \"kubernetes.io/projected/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-kube-api-access-qdb88\") pod \"controller-manager-c57999846-vhncb\" (UID: \"2ce4b44b-16c6-43bf-8bb3-6d40f53edc17\") " pod="openshift-controller-manager/controller-manager-c57999846-vhncb"
Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.631597 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb9mv\" (UniqueName: \"kubernetes.io/projected/82017a0e-90b1-445c-8987-eb750e188245-kube-api-access-gb9mv\") pod \"route-controller-manager-669976547d-wnhxl\" (UID: \"82017a0e-90b1-445c-8987-eb750e188245\") " pod="openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl"
Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.678294 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.811862 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c57999846-vhncb"
Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.819777 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl"
Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.902194 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33f7e358-34be-4503-bdd1-1235b134b9cb" path="/var/lib/kubelet/pods/33f7e358-34be-4503-bdd1-1235b134b9cb/volumes"
Jan 28 15:06:48 crc kubenswrapper[4893]: I0128 15:06:48.903005 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7404534-e869-46d4-a493-e8971172b7b0" path="/var/lib/kubelet/pods/f7404534-e869-46d4-a493-e8971172b7b0/volumes"
Jan 28 15:06:49 crc kubenswrapper[4893]: I0128 15:06:49.036371 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c57999846-vhncb"]
Jan 28 15:06:49 crc kubenswrapper[4893]: W0128 15:06:49.048928 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ce4b44b_16c6_43bf_8bb3_6d40f53edc17.slice/crio-31b8d5bf73b3f3fc54bae3b72be1110fc10fa74faaaf0384fe169316f6f4f93b WatchSource:0}: Error finding container 31b8d5bf73b3f3fc54bae3b72be1110fc10fa74faaaf0384fe169316f6f4f93b: Status 404 returned error can't find the container with id 31b8d5bf73b3f3fc54bae3b72be1110fc10fa74faaaf0384fe169316f6f4f93b
Jan 28 15:06:49 crc kubenswrapper[4893]: I0128 15:06:49.067768 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl"]
Jan 28 15:06:49 crc kubenswrapper[4893]: I0128 15:06:49.145415 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl" event={"ID":"82017a0e-90b1-445c-8987-eb750e188245","Type":"ContainerStarted","Data":"1fd0930a588a5f945cd131b0c1c3ea64490feb836761b752b25750822c726a8e"}
Jan 28 15:06:49 crc kubenswrapper[4893]: I0128 15:06:49.150662 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fzfvj" event={"ID":"9a587792-e86e-434f-873e-c7ce3aac8bce","Type":"ContainerStarted","Data":"2a4656bdd30cb8f1162410c9a24ad5ed87a5abd8d1bc59ab392abb0842545b50"}
Jan 28 15:06:49 crc kubenswrapper[4893]: I0128 15:06:49.151077 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fzfvj"
Jan 28 15:06:49 crc kubenswrapper[4893]: I0128 15:06:49.170158 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c57999846-vhncb" event={"ID":"2ce4b44b-16c6-43bf-8bb3-6d40f53edc17","Type":"ContainerStarted","Data":"31b8d5bf73b3f3fc54bae3b72be1110fc10fa74faaaf0384fe169316f6f4f93b"}
Jan 28 15:06:49 crc kubenswrapper[4893]: I0128 15:06:49.170292 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-fzfvj"
Jan 28 15:06:50 crc kubenswrapper[4893]: I0128 15:06:50.180598 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c57999846-vhncb" event={"ID":"2ce4b44b-16c6-43bf-8bb3-6d40f53edc17","Type":"ContainerStarted","Data":"9c77b271d2c163c048721f32babea3a1143126b9d5cd4ca0588e3bc7c8c7b6be"}
Jan 28 15:06:50 crc kubenswrapper[4893]: I0128 15:06:50.181082 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-c57999846-vhncb"
Jan 28 15:06:50 crc kubenswrapper[4893]: I0128 15:06:50.185587 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl" event={"ID":"82017a0e-90b1-445c-8987-eb750e188245","Type":"ContainerStarted","Data":"e549ddc335acfde11e89073d6245159fd7dbe91cc54b60e3b56fb27e2a33bf50"}
Jan 28 15:06:50 crc kubenswrapper[4893]: I0128 15:06:50.185898 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl"
Jan 28 15:06:50 crc kubenswrapper[4893]: I0128 15:06:50.186109 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-c57999846-vhncb"
Jan 28 15:06:50 crc kubenswrapper[4893]: I0128 15:06:50.193607 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl"
Jan 28 15:06:50 crc kubenswrapper[4893]: I0128 15:06:50.210717 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-c57999846-vhncb" podStartSLOduration=4.21069509 podStartE2EDuration="4.21069509s" podCreationTimestamp="2026-01-28 15:06:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:06:50.205721188 +0000 UTC m=+327.979336216" watchObservedRunningTime="2026-01-28 15:06:50.21069509 +0000 UTC m=+327.984310118"
Jan 28 15:06:50 crc kubenswrapper[4893]: I0128 15:06:50.260849 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl" podStartSLOduration=4.260808454 podStartE2EDuration="4.260808454s" podCreationTimestamp="2026-01-28 15:06:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:06:50.246841997 +0000 UTC m=+328.020457065" watchObservedRunningTime="2026-01-28 15:06:50.260808454 +0000 UTC m=+328.034423512"
Jan 28 15:06:50 crc kubenswrapper[4893]: I0128 15:06:50.713569 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 28 15:06:52 crc kubenswrapper[4893]: I0128 15:06:52.407006 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 28 15:06:53 crc kubenswrapper[4893]: I0128 15:06:53.719395 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 28 15:06:53 crc kubenswrapper[4893]: I0128 15:06:53.858182 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 28 15:06:55 crc kubenswrapper[4893]: I0128 15:06:55.172079 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 28 15:06:55 crc kubenswrapper[4893]: I0128 15:06:55.592422 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 28 15:06:56 crc kubenswrapper[4893]: I0128 15:06:56.126650 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 28 15:06:56 crc kubenswrapper[4893]: I0128 15:06:56.582757 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 28 15:06:57 crc kubenswrapper[4893]: I0128 15:06:57.094152 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 28 15:06:57 crc kubenswrapper[4893]: I0128 15:06:57.322424 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 28 15:06:57 crc kubenswrapper[4893]: I0128 15:06:57.329983 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 28 15:06:57 crc kubenswrapper[4893]: I0128 15:06:57.742173 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 28 15:06:57 crc kubenswrapper[4893]: I0128 15:06:57.782844 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 28 15:06:58 crc kubenswrapper[4893]: I0128 15:06:58.182149 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 28 15:06:58 crc kubenswrapper[4893]: I0128 15:06:58.183166 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 28 15:06:58 crc kubenswrapper[4893]: I0128 15:06:58.448667 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 28 15:06:59 crc kubenswrapper[4893]: I0128 15:06:59.448026 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 28 15:07:00 crc kubenswrapper[4893]: I0128 15:07:00.055034 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 28 15:07:00 crc kubenswrapper[4893]: I0128 15:07:00.507350 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 28 15:07:01 crc kubenswrapper[4893]: I0128 15:07:01.085514 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 28 15:07:01 crc kubenswrapper[4893]: I0128 15:07:01.092954 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 28 15:07:01 crc kubenswrapper[4893]: I0128 15:07:01.538758 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 28 15:07:02 crc kubenswrapper[4893]: I0128 15:07:02.732952 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 28 15:07:02 crc kubenswrapper[4893]: I0128 15:07:02.796513 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 28 15:07:02 crc kubenswrapper[4893]: I0128 15:07:02.852936 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 28 15:07:03 crc kubenswrapper[4893]: I0128 15:07:03.146122 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 28 15:07:03 crc kubenswrapper[4893]: I0128 15:07:03.271049 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 28 15:07:03 crc kubenswrapper[4893]: I0128 15:07:03.891394 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 28 15:07:04 crc kubenswrapper[4893]: I0128 15:07:04.594992 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 28 15:07:04 crc kubenswrapper[4893]: I0128 15:07:04.770220 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 28 15:07:05 crc kubenswrapper[4893]: I0128 15:07:05.076245 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 28 15:07:05 crc kubenswrapper[4893]: I0128 15:07:05.501989 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 28 15:07:05 crc kubenswrapper[4893]: I0128 15:07:05.936120 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 28 15:07:06 crc kubenswrapper[4893]: I0128 15:07:06.220438 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 28 15:07:06 crc kubenswrapper[4893]: I0128 15:07:06.481836 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 28 15:07:06 crc kubenswrapper[4893]: I0128 15:07:06.575824 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c57999846-vhncb"]
Jan 28 15:07:06 crc kubenswrapper[4893]: I0128 15:07:06.576132 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-c57999846-vhncb" podUID="2ce4b44b-16c6-43bf-8bb3-6d40f53edc17" containerName="controller-manager" containerID="cri-o://9c77b271d2c163c048721f32babea3a1143126b9d5cd4ca0588e3bc7c8c7b6be" gracePeriod=30
Jan 28 15:07:06 crc kubenswrapper[4893]: I0128 15:07:06.587967 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl"]
Jan 28 15:07:06 crc kubenswrapper[4893]: I0128 15:07:06.588251 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl" podUID="82017a0e-90b1-445c-8987-eb750e188245" containerName="route-controller-manager" containerID="cri-o://e549ddc335acfde11e89073d6245159fd7dbe91cc54b60e3b56fb27e2a33bf50" gracePeriod=30
Jan 28 15:07:07 crc kubenswrapper[4893]: I0128 15:07:07.285078 4893 generic.go:334] "Generic (PLEG): container finished" podID="2ce4b44b-16c6-43bf-8bb3-6d40f53edc17" containerID="9c77b271d2c163c048721f32babea3a1143126b9d5cd4ca0588e3bc7c8c7b6be" exitCode=0
Jan 28 15:07:07 crc kubenswrapper[4893]: I0128 15:07:07.285172 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c57999846-vhncb" event={"ID":"2ce4b44b-16c6-43bf-8bb3-6d40f53edc17","Type":"ContainerDied","Data":"9c77b271d2c163c048721f32babea3a1143126b9d5cd4ca0588e3bc7c8c7b6be"}
Jan 28 15:07:07 crc kubenswrapper[4893]: I0128 15:07:07.769996 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c57999846-vhncb"
Jan 28 15:07:07 crc kubenswrapper[4893]: I0128 15:07:07.799848 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-b4c755c99-r5824"]
Jan 28 15:07:07 crc kubenswrapper[4893]: E0128 15:07:07.800104 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ce4b44b-16c6-43bf-8bb3-6d40f53edc17" containerName="controller-manager"
Jan 28 15:07:07 crc kubenswrapper[4893]: I0128 15:07:07.800118 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ce4b44b-16c6-43bf-8bb3-6d40f53edc17" containerName="controller-manager"
Jan 28 15:07:07 crc kubenswrapper[4893]: I0128 15:07:07.800215 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ce4b44b-16c6-43bf-8bb3-6d40f53edc17" containerName="controller-manager"
Jan 28 15:07:07 crc kubenswrapper[4893]: I0128 15:07:07.800674 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b4c755c99-r5824"
Jan 28 15:07:07 crc kubenswrapper[4893]: I0128 15:07:07.815837 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b4c755c99-r5824"]
Jan 28 15:07:07 crc kubenswrapper[4893]: I0128 15:07:07.860745 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-serving-cert\") pod \"2ce4b44b-16c6-43bf-8bb3-6d40f53edc17\" (UID: \"2ce4b44b-16c6-43bf-8bb3-6d40f53edc17\") "
Jan 28 15:07:07 crc kubenswrapper[4893]: I0128 15:07:07.860799 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-config\") pod \"2ce4b44b-16c6-43bf-8bb3-6d40f53edc17\" (UID: \"2ce4b44b-16c6-43bf-8bb3-6d40f53edc17\") "
Jan 28 15:07:07 crc kubenswrapper[4893]: I0128 15:07:07.860872 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdb88\" (UniqueName: \"kubernetes.io/projected/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-kube-api-access-qdb88\") pod \"2ce4b44b-16c6-43bf-8bb3-6d40f53edc17\" (UID: \"2ce4b44b-16c6-43bf-8bb3-6d40f53edc17\") "
Jan 28 15:07:07 crc kubenswrapper[4893]: I0128 15:07:07.860906 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-client-ca\") pod \"2ce4b44b-16c6-43bf-8bb3-6d40f53edc17\" (UID: \"2ce4b44b-16c6-43bf-8bb3-6d40f53edc17\") "
Jan 28 15:07:07 crc kubenswrapper[4893]: I0128 15:07:07.860960 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-proxy-ca-bundles\") pod \"2ce4b44b-16c6-43bf-8bb3-6d40f53edc17\" (UID: \"2ce4b44b-16c6-43bf-8bb3-6d40f53edc17\") "
Jan 28 15:07:07 crc kubenswrapper[4893]: I0128 15:07:07.861630 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-client-ca" (OuterVolumeSpecName: "client-ca") pod "2ce4b44b-16c6-43bf-8bb3-6d40f53edc17" (UID: "2ce4b44b-16c6-43bf-8bb3-6d40f53edc17"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:07:07 crc kubenswrapper[4893]: I0128 15:07:07.861683 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2ce4b44b-16c6-43bf-8bb3-6d40f53edc17" (UID: "2ce4b44b-16c6-43bf-8bb3-6d40f53edc17"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:07:07 crc kubenswrapper[4893]: I0128 15:07:07.861797 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-config" (OuterVolumeSpecName: "config") pod "2ce4b44b-16c6-43bf-8bb3-6d40f53edc17" (UID: "2ce4b44b-16c6-43bf-8bb3-6d40f53edc17"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:07:07 crc kubenswrapper[4893]: I0128 15:07:07.869704 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2ce4b44b-16c6-43bf-8bb3-6d40f53edc17" (UID: "2ce4b44b-16c6-43bf-8bb3-6d40f53edc17"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:07:07 crc kubenswrapper[4893]: I0128 15:07:07.869791 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-kube-api-access-qdb88" (OuterVolumeSpecName: "kube-api-access-qdb88") pod "2ce4b44b-16c6-43bf-8bb3-6d40f53edc17" (UID: "2ce4b44b-16c6-43bf-8bb3-6d40f53edc17"). InnerVolumeSpecName "kube-api-access-qdb88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:07:07 crc kubenswrapper[4893]: I0128 15:07:07.962821 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/831a3335-0187-40fa-bc8b-df4df6b51c4c-client-ca\") pod \"controller-manager-b4c755c99-r5824\" (UID: \"831a3335-0187-40fa-bc8b-df4df6b51c4c\") " pod="openshift-controller-manager/controller-manager-b4c755c99-r5824"
Jan 28 15:07:07 crc kubenswrapper[4893]: I0128 15:07:07.962876 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8smvn\" (UniqueName: \"kubernetes.io/projected/831a3335-0187-40fa-bc8b-df4df6b51c4c-kube-api-access-8smvn\") pod \"controller-manager-b4c755c99-r5824\" (UID: \"831a3335-0187-40fa-bc8b-df4df6b51c4c\") " pod="openshift-controller-manager/controller-manager-b4c755c99-r5824"
Jan 28 15:07:07 crc kubenswrapper[4893]: I0128 15:07:07.962942 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/831a3335-0187-40fa-bc8b-df4df6b51c4c-proxy-ca-bundles\") pod \"controller-manager-b4c755c99-r5824\" (UID: \"831a3335-0187-40fa-bc8b-df4df6b51c4c\") " pod="openshift-controller-manager/controller-manager-b4c755c99-r5824"
Jan 28 15:07:07 crc kubenswrapper[4893]: I0128 15:07:07.962964 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/831a3335-0187-40fa-bc8b-df4df6b51c4c-config\") pod \"controller-manager-b4c755c99-r5824\" (UID: \"831a3335-0187-40fa-bc8b-df4df6b51c4c\") " pod="openshift-controller-manager/controller-manager-b4c755c99-r5824"
Jan 28 15:07:07 crc kubenswrapper[4893]: I0128 15:07:07.963083 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/831a3335-0187-40fa-bc8b-df4df6b51c4c-serving-cert\") pod \"controller-manager-b4c755c99-r5824\" (UID: \"831a3335-0187-40fa-bc8b-df4df6b51c4c\") " pod="openshift-controller-manager/controller-manager-b4c755c99-r5824"
Jan 28 15:07:07 crc kubenswrapper[4893]: I0128 15:07:07.963161 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdb88\" (UniqueName: \"kubernetes.io/projected/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-kube-api-access-qdb88\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:07 crc kubenswrapper[4893]: I0128 15:07:07.963181 4893 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-client-ca\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:07 crc kubenswrapper[4893]: I0128 15:07:07.963194 4893 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:07 crc kubenswrapper[4893]: I0128 15:07:07.963205 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:07 crc kubenswrapper[4893]: I0128 15:07:07.963216 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17-config\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.064561 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/831a3335-0187-40fa-bc8b-df4df6b51c4c-proxy-ca-bundles\") pod \"controller-manager-b4c755c99-r5824\" (UID: \"831a3335-0187-40fa-bc8b-df4df6b51c4c\") " pod="openshift-controller-manager/controller-manager-b4c755c99-r5824"
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.064966 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/831a3335-0187-40fa-bc8b-df4df6b51c4c-config\") pod \"controller-manager-b4c755c99-r5824\" (UID: \"831a3335-0187-40fa-bc8b-df4df6b51c4c\") " pod="openshift-controller-manager/controller-manager-b4c755c99-r5824"
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.064995 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/831a3335-0187-40fa-bc8b-df4df6b51c4c-serving-cert\") pod \"controller-manager-b4c755c99-r5824\" (UID: \"831a3335-0187-40fa-bc8b-df4df6b51c4c\") " pod="openshift-controller-manager/controller-manager-b4c755c99-r5824"
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.065040 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/831a3335-0187-40fa-bc8b-df4df6b51c4c-client-ca\") pod \"controller-manager-b4c755c99-r5824\" (UID: \"831a3335-0187-40fa-bc8b-df4df6b51c4c\") " pod="openshift-controller-manager/controller-manager-b4c755c99-r5824"
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.065060 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8smvn\" (UniqueName: \"kubernetes.io/projected/831a3335-0187-40fa-bc8b-df4df6b51c4c-kube-api-access-8smvn\") pod \"controller-manager-b4c755c99-r5824\" (UID: \"831a3335-0187-40fa-bc8b-df4df6b51c4c\") " pod="openshift-controller-manager/controller-manager-b4c755c99-r5824"
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.067357 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/831a3335-0187-40fa-bc8b-df4df6b51c4c-proxy-ca-bundles\") pod \"controller-manager-b4c755c99-r5824\" (UID: \"831a3335-0187-40fa-bc8b-df4df6b51c4c\") " pod="openshift-controller-manager/controller-manager-b4c755c99-r5824"
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.067751 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/831a3335-0187-40fa-bc8b-df4df6b51c4c-config\") pod \"controller-manager-b4c755c99-r5824\" (UID: \"831a3335-0187-40fa-bc8b-df4df6b51c4c\") " pod="openshift-controller-manager/controller-manager-b4c755c99-r5824"
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.068040 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/831a3335-0187-40fa-bc8b-df4df6b51c4c-client-ca\") pod \"controller-manager-b4c755c99-r5824\" (UID: \"831a3335-0187-40fa-bc8b-df4df6b51c4c\") " pod="openshift-controller-manager/controller-manager-b4c755c99-r5824"
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.080405 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/831a3335-0187-40fa-bc8b-df4df6b51c4c-serving-cert\") pod \"controller-manager-b4c755c99-r5824\" (UID: \"831a3335-0187-40fa-bc8b-df4df6b51c4c\") " pod="openshift-controller-manager/controller-manager-b4c755c99-r5824"
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.083681 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8smvn\" (UniqueName: \"kubernetes.io/projected/831a3335-0187-40fa-bc8b-df4df6b51c4c-kube-api-access-8smvn\") pod \"controller-manager-b4c755c99-r5824\" (UID: \"831a3335-0187-40fa-bc8b-df4df6b51c4c\") " pod="openshift-controller-manager/controller-manager-b4c755c99-r5824"
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.119298 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b4c755c99-r5824"
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.147810 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl"
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.268225 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gb9mv\" (UniqueName: \"kubernetes.io/projected/82017a0e-90b1-445c-8987-eb750e188245-kube-api-access-gb9mv\") pod \"82017a0e-90b1-445c-8987-eb750e188245\" (UID: \"82017a0e-90b1-445c-8987-eb750e188245\") "
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.268345 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82017a0e-90b1-445c-8987-eb750e188245-config\") pod \"82017a0e-90b1-445c-8987-eb750e188245\" (UID: \"82017a0e-90b1-445c-8987-eb750e188245\") "
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.268379 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82017a0e-90b1-445c-8987-eb750e188245-client-ca\") pod \"82017a0e-90b1-445c-8987-eb750e188245\" (UID: \"82017a0e-90b1-445c-8987-eb750e188245\") "
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.268410 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82017a0e-90b1-445c-8987-eb750e188245-serving-cert\") pod \"82017a0e-90b1-445c-8987-eb750e188245\" (UID: \"82017a0e-90b1-445c-8987-eb750e188245\") "
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.270121 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82017a0e-90b1-445c-8987-eb750e188245-config" (OuterVolumeSpecName: "config") pod "82017a0e-90b1-445c-8987-eb750e188245" (UID: "82017a0e-90b1-445c-8987-eb750e188245"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.270344 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82017a0e-90b1-445c-8987-eb750e188245-client-ca" (OuterVolumeSpecName: "client-ca") pod "82017a0e-90b1-445c-8987-eb750e188245" (UID: "82017a0e-90b1-445c-8987-eb750e188245"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.273672 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82017a0e-90b1-445c-8987-eb750e188245-kube-api-access-gb9mv" (OuterVolumeSpecName: "kube-api-access-gb9mv") pod "82017a0e-90b1-445c-8987-eb750e188245" (UID: "82017a0e-90b1-445c-8987-eb750e188245"). InnerVolumeSpecName "kube-api-access-gb9mv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.274364 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82017a0e-90b1-445c-8987-eb750e188245-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "82017a0e-90b1-445c-8987-eb750e188245" (UID: "82017a0e-90b1-445c-8987-eb750e188245"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.294944 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c57999846-vhncb"
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.295003 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c57999846-vhncb" event={"ID":"2ce4b44b-16c6-43bf-8bb3-6d40f53edc17","Type":"ContainerDied","Data":"31b8d5bf73b3f3fc54bae3b72be1110fc10fa74faaaf0384fe169316f6f4f93b"}
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.295079 4893 scope.go:117] "RemoveContainer" containerID="9c77b271d2c163c048721f32babea3a1143126b9d5cd4ca0588e3bc7c8c7b6be"
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.296857 4893 generic.go:334] "Generic (PLEG): container finished" podID="82017a0e-90b1-445c-8987-eb750e188245" containerID="e549ddc335acfde11e89073d6245159fd7dbe91cc54b60e3b56fb27e2a33bf50" exitCode=0
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.296892 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl" event={"ID":"82017a0e-90b1-445c-8987-eb750e188245","Type":"ContainerDied","Data":"e549ddc335acfde11e89073d6245159fd7dbe91cc54b60e3b56fb27e2a33bf50"}
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.296916 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl" event={"ID":"82017a0e-90b1-445c-8987-eb750e188245","Type":"ContainerDied","Data":"1fd0930a588a5f945cd131b0c1c3ea64490feb836761b752b25750822c726a8e"}
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.296951 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl"
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.328780 4893 scope.go:117] "RemoveContainer" containerID="e549ddc335acfde11e89073d6245159fd7dbe91cc54b60e3b56fb27e2a33bf50"
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.328910 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c57999846-vhncb"]
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.332992 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-c57999846-vhncb"]
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.338363 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl"]
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.342743 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-669976547d-wnhxl"]
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.347918 4893 scope.go:117] "RemoveContainer" containerID="e549ddc335acfde11e89073d6245159fd7dbe91cc54b60e3b56fb27e2a33bf50"
Jan 28 15:07:08 crc kubenswrapper[4893]: E0128 15:07:08.348431 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e549ddc335acfde11e89073d6245159fd7dbe91cc54b60e3b56fb27e2a33bf50\": container with ID starting with e549ddc335acfde11e89073d6245159fd7dbe91cc54b60e3b56fb27e2a33bf50 not found: ID does not exist" containerID="e549ddc335acfde11e89073d6245159fd7dbe91cc54b60e3b56fb27e2a33bf50"
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.348512 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e549ddc335acfde11e89073d6245159fd7dbe91cc54b60e3b56fb27e2a33bf50"} err="failed to get container status \"e549ddc335acfde11e89073d6245159fd7dbe91cc54b60e3b56fb27e2a33bf50\": rpc error: code = NotFound desc = could not find container \"e549ddc335acfde11e89073d6245159fd7dbe91cc54b60e3b56fb27e2a33bf50\": container with ID starting with e549ddc335acfde11e89073d6245159fd7dbe91cc54b60e3b56fb27e2a33bf50 not found: ID does not exist"
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.370309 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82017a0e-90b1-445c-8987-eb750e188245-config\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.370362 4893 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/82017a0e-90b1-445c-8987-eb750e188245-client-ca\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.370378 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82017a0e-90b1-445c-8987-eb750e188245-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.370394 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gb9mv\" (UniqueName: \"kubernetes.io/projected/82017a0e-90b1-445c-8987-eb750e188245-kube-api-access-gb9mv\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.533826 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b4c755c99-r5824"]
Jan 28 15:07:08 crc kubenswrapper[4893]: W0128 15:07:08.545648 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod831a3335_0187_40fa_bc8b_df4df6b51c4c.slice/crio-3854f801f986f3060e8a6a40252ebdfb5ef290a022efc884f6ee3d9db83af70f WatchSource:0}: Error finding container 3854f801f986f3060e8a6a40252ebdfb5ef290a022efc884f6ee3d9db83af70f: Status 404 returned error can't find the container with id 3854f801f986f3060e8a6a40252ebdfb5ef290a022efc884f6ee3d9db83af70f
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.899887 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ce4b44b-16c6-43bf-8bb3-6d40f53edc17" path="/var/lib/kubelet/pods/2ce4b44b-16c6-43bf-8bb3-6d40f53edc17/volumes"
Jan 28 15:07:08 crc kubenswrapper[4893]: I0128 15:07:08.900803 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82017a0e-90b1-445c-8987-eb750e188245" path="/var/lib/kubelet/pods/82017a0e-90b1-445c-8987-eb750e188245/volumes"
Jan 28 15:07:09 crc kubenswrapper[4893]: I0128 15:07:09.304243 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b4c755c99-r5824" event={"ID":"831a3335-0187-40fa-bc8b-df4df6b51c4c","Type":"ContainerStarted","Data":"b37d9e3637b1953cfc4cc3b86ab0b337a28e357402e48e831cdd9dd76f741768"}
Jan 28 15:07:09 crc kubenswrapper[4893]: I0128 15:07:09.304290 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b4c755c99-r5824" event={"ID":"831a3335-0187-40fa-bc8b-df4df6b51c4c","Type":"ContainerStarted","Data":"3854f801f986f3060e8a6a40252ebdfb5ef290a022efc884f6ee3d9db83af70f"}
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.145124 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.225131 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.316271 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-b4c755c99-r5824"
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.320981 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-b4c755c99-r5824"
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.333981 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-b4c755c99-r5824" podStartSLOduration=4.333959106 podStartE2EDuration="4.333959106s" podCreationTimestamp="2026-01-28 15:07:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:07:10.332343241 +0000 UTC m=+348.105958269" watchObservedRunningTime="2026-01-28 15:07:10.333959106 +0000 UTC m=+348.107574134"
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.504825 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq"]
Jan 28 15:07:10 crc kubenswrapper[4893]: E0128 15:07:10.505332 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82017a0e-90b1-445c-8987-eb750e188245" containerName="route-controller-manager"
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.505400 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="82017a0e-90b1-445c-8987-eb750e188245" containerName="route-controller-manager"
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.505614 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="82017a0e-90b1-445c-8987-eb750e188245" containerName="route-controller-manager"
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.506084 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq"
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.508121 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.508459 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.509025 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.509041 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.509322 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.509452 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.523837 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq"]
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.601605 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/906223d4-a28a-4317-bf9f-8513b8c8aa3c-serving-cert\") pod \"route-controller-manager-f44897b5c-fmrzq\" (UID: \"906223d4-a28a-4317-bf9f-8513b8c8aa3c\") " pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq"
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.601653 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/906223d4-a28a-4317-bf9f-8513b8c8aa3c-client-ca\") pod \"route-controller-manager-f44897b5c-fmrzq\" (UID: \"906223d4-a28a-4317-bf9f-8513b8c8aa3c\") " pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq"
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.601681 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6s6q\" (UniqueName: \"kubernetes.io/projected/906223d4-a28a-4317-bf9f-8513b8c8aa3c-kube-api-access-c6s6q\") pod \"route-controller-manager-f44897b5c-fmrzq\" (UID: \"906223d4-a28a-4317-bf9f-8513b8c8aa3c\") " pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq"
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.601709 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/906223d4-a28a-4317-bf9f-8513b8c8aa3c-config\") pod \"route-controller-manager-f44897b5c-fmrzq\" (UID: \"906223d4-a28a-4317-bf9f-8513b8c8aa3c\") " pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq"
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.674361 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.687500 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.703425 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/906223d4-a28a-4317-bf9f-8513b8c8aa3c-serving-cert\") pod \"route-controller-manager-f44897b5c-fmrzq\" (UID: \"906223d4-a28a-4317-bf9f-8513b8c8aa3c\") " pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq"
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.703771 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/906223d4-a28a-4317-bf9f-8513b8c8aa3c-client-ca\") pod \"route-controller-manager-f44897b5c-fmrzq\" (UID: \"906223d4-a28a-4317-bf9f-8513b8c8aa3c\") " pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq"
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.703869 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6s6q\" (UniqueName: \"kubernetes.io/projected/906223d4-a28a-4317-bf9f-8513b8c8aa3c-kube-api-access-c6s6q\") pod \"route-controller-manager-f44897b5c-fmrzq\" (UID: \"906223d4-a28a-4317-bf9f-8513b8c8aa3c\") " pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq"
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.703995 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/906223d4-a28a-4317-bf9f-8513b8c8aa3c-config\") pod \"route-controller-manager-f44897b5c-fmrzq\" (UID: \"906223d4-a28a-4317-bf9f-8513b8c8aa3c\") " pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq"
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.705028 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/906223d4-a28a-4317-bf9f-8513b8c8aa3c-client-ca\") pod \"route-controller-manager-f44897b5c-fmrzq\" (UID: \"906223d4-a28a-4317-bf9f-8513b8c8aa3c\") " pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq"
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.705211 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/906223d4-a28a-4317-bf9f-8513b8c8aa3c-config\") pod \"route-controller-manager-f44897b5c-fmrzq\" (UID: \"906223d4-a28a-4317-bf9f-8513b8c8aa3c\") " pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq"
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.710201 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/906223d4-a28a-4317-bf9f-8513b8c8aa3c-serving-cert\") pod \"route-controller-manager-f44897b5c-fmrzq\" (UID: \"906223d4-a28a-4317-bf9f-8513b8c8aa3c\") " pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq"
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.721112 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6s6q\" (UniqueName: \"kubernetes.io/projected/906223d4-a28a-4317-bf9f-8513b8c8aa3c-kube-api-access-c6s6q\") pod \"route-controller-manager-f44897b5c-fmrzq\" (UID: \"906223d4-a28a-4317-bf9f-8513b8c8aa3c\") " pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq"
Jan 28 15:07:10 crc kubenswrapper[4893]: I0128 15:07:10.825171 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq"
Jan 28 15:07:11 crc kubenswrapper[4893]: I0128 15:07:11.289162 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq"]
Jan 28 15:07:11 crc kubenswrapper[4893]: W0128 15:07:11.296510 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod906223d4_a28a_4317_bf9f_8513b8c8aa3c.slice/crio-176d2d3627bb751c07ffff9b1b3f9e2bd944fab143fe37032e661d80a3398dc2 WatchSource:0}: Error finding container 176d2d3627bb751c07ffff9b1b3f9e2bd944fab143fe37032e661d80a3398dc2: Status 404 returned error can't find the container with id 176d2d3627bb751c07ffff9b1b3f9e2bd944fab143fe37032e661d80a3398dc2
Jan 28 15:07:11 crc kubenswrapper[4893]: I0128 15:07:11.322526 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq" event={"ID":"906223d4-a28a-4317-bf9f-8513b8c8aa3c","Type":"ContainerStarted","Data":"176d2d3627bb751c07ffff9b1b3f9e2bd944fab143fe37032e661d80a3398dc2"}
Jan 28 15:07:12 crc kubenswrapper[4893]: I0128 15:07:12.330158 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq" event={"ID":"906223d4-a28a-4317-bf9f-8513b8c8aa3c","Type":"ContainerStarted","Data":"5806b435dd7b9f94c2d0580d160c88b2aa4864fe2890173946b6c0247adf550a"}
Jan 28 15:07:12 crc kubenswrapper[4893]: I0128 15:07:12.352362 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq" podStartSLOduration=6.352330749 podStartE2EDuration="6.352330749s" podCreationTimestamp="2026-01-28 15:07:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:07:12.346670548 +0000 UTC m=+350.120285596" watchObservedRunningTime="2026-01-28 15:07:12.352330749 +0000 UTC m=+350.125945797"
Jan 28 15:07:12 crc kubenswrapper[4893]: I0128 15:07:12.422911 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 28 15:07:13 crc kubenswrapper[4893]: I0128 15:07:13.335524 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq"
Jan 28 15:07:13 crc kubenswrapper[4893]: I0128 15:07:13.341086 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq"
Jan 28 15:07:16 crc kubenswrapper[4893]: I0128 15:07:16.317671 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 28 15:07:16 crc kubenswrapper[4893]: I0128 15:07:16.456348 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 28 15:07:26 crc kubenswrapper[4893]: I0128 15:07:26.596634 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b4c755c99-r5824"]
Jan 28 15:07:26 crc kubenswrapper[4893]: I0128 15:07:26.597601 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-b4c755c99-r5824" podUID="831a3335-0187-40fa-bc8b-df4df6b51c4c" containerName="controller-manager" containerID="cri-o://b37d9e3637b1953cfc4cc3b86ab0b337a28e357402e48e831cdd9dd76f741768" gracePeriod=30
Jan 28 15:07:26 crc kubenswrapper[4893]: I0128 15:07:26.688007 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq"]
Jan 28 15:07:26 crc kubenswrapper[4893]: I0128 15:07:26.688276 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq" podUID="906223d4-a28a-4317-bf9f-8513b8c8aa3c" containerName="route-controller-manager" containerID="cri-o://5806b435dd7b9f94c2d0580d160c88b2aa4864fe2890173946b6c0247adf550a" gracePeriod=30
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.222490 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq"
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.243684 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/906223d4-a28a-4317-bf9f-8513b8c8aa3c-config\") pod \"906223d4-a28a-4317-bf9f-8513b8c8aa3c\" (UID: \"906223d4-a28a-4317-bf9f-8513b8c8aa3c\") "
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.243774 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/906223d4-a28a-4317-bf9f-8513b8c8aa3c-serving-cert\") pod \"906223d4-a28a-4317-bf9f-8513b8c8aa3c\" (UID: \"906223d4-a28a-4317-bf9f-8513b8c8aa3c\") "
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.243824 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/906223d4-a28a-4317-bf9f-8513b8c8aa3c-client-ca\") pod \"906223d4-a28a-4317-bf9f-8513b8c8aa3c\" (UID: \"906223d4-a28a-4317-bf9f-8513b8c8aa3c\") "
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.243915 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6s6q\" (UniqueName: \"kubernetes.io/projected/906223d4-a28a-4317-bf9f-8513b8c8aa3c-kube-api-access-c6s6q\") pod \"906223d4-a28a-4317-bf9f-8513b8c8aa3c\" (UID: \"906223d4-a28a-4317-bf9f-8513b8c8aa3c\") "
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.244829 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/906223d4-a28a-4317-bf9f-8513b8c8aa3c-config" (OuterVolumeSpecName: "config") pod "906223d4-a28a-4317-bf9f-8513b8c8aa3c" (UID: "906223d4-a28a-4317-bf9f-8513b8c8aa3c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.244850 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/906223d4-a28a-4317-bf9f-8513b8c8aa3c-client-ca" (OuterVolumeSpecName: "client-ca") pod "906223d4-a28a-4317-bf9f-8513b8c8aa3c" (UID: "906223d4-a28a-4317-bf9f-8513b8c8aa3c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.251292 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/906223d4-a28a-4317-bf9f-8513b8c8aa3c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "906223d4-a28a-4317-bf9f-8513b8c8aa3c" (UID: "906223d4-a28a-4317-bf9f-8513b8c8aa3c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.261701 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/906223d4-a28a-4317-bf9f-8513b8c8aa3c-kube-api-access-c6s6q" (OuterVolumeSpecName: "kube-api-access-c6s6q") pod "906223d4-a28a-4317-bf9f-8513b8c8aa3c" (UID: "906223d4-a28a-4317-bf9f-8513b8c8aa3c"). InnerVolumeSpecName "kube-api-access-c6s6q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.293588 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b4c755c99-r5824"
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.345874 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/831a3335-0187-40fa-bc8b-df4df6b51c4c-client-ca\") pod \"831a3335-0187-40fa-bc8b-df4df6b51c4c\" (UID: \"831a3335-0187-40fa-bc8b-df4df6b51c4c\") "
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.345956 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8smvn\" (UniqueName: \"kubernetes.io/projected/831a3335-0187-40fa-bc8b-df4df6b51c4c-kube-api-access-8smvn\") pod \"831a3335-0187-40fa-bc8b-df4df6b51c4c\" (UID: \"831a3335-0187-40fa-bc8b-df4df6b51c4c\") "
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.346219 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/831a3335-0187-40fa-bc8b-df4df6b51c4c-serving-cert\") pod \"831a3335-0187-40fa-bc8b-df4df6b51c4c\" (UID: \"831a3335-0187-40fa-bc8b-df4df6b51c4c\") "
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.346272 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/831a3335-0187-40fa-bc8b-df4df6b51c4c-config\") pod \"831a3335-0187-40fa-bc8b-df4df6b51c4c\" (UID: \"831a3335-0187-40fa-bc8b-df4df6b51c4c\") "
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.346307 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/831a3335-0187-40fa-bc8b-df4df6b51c4c-proxy-ca-bundles\") pod \"831a3335-0187-40fa-bc8b-df4df6b51c4c\" (UID: \"831a3335-0187-40fa-bc8b-df4df6b51c4c\") "
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.346639 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/906223d4-a28a-4317-bf9f-8513b8c8aa3c-config\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.346662 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/906223d4-a28a-4317-bf9f-8513b8c8aa3c-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.346675 4893 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/906223d4-a28a-4317-bf9f-8513b8c8aa3c-client-ca\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.346690 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6s6q\" (UniqueName: \"kubernetes.io/projected/906223d4-a28a-4317-bf9f-8513b8c8aa3c-kube-api-access-c6s6q\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.347295 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/831a3335-0187-40fa-bc8b-df4df6b51c4c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "831a3335-0187-40fa-bc8b-df4df6b51c4c" (UID: "831a3335-0187-40fa-bc8b-df4df6b51c4c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.347616 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/831a3335-0187-40fa-bc8b-df4df6b51c4c-config" (OuterVolumeSpecName: "config") pod "831a3335-0187-40fa-bc8b-df4df6b51c4c" (UID: "831a3335-0187-40fa-bc8b-df4df6b51c4c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.347886 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/831a3335-0187-40fa-bc8b-df4df6b51c4c-client-ca" (OuterVolumeSpecName: "client-ca") pod "831a3335-0187-40fa-bc8b-df4df6b51c4c" (UID: "831a3335-0187-40fa-bc8b-df4df6b51c4c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.351106 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/831a3335-0187-40fa-bc8b-df4df6b51c4c-kube-api-access-8smvn" (OuterVolumeSpecName: "kube-api-access-8smvn") pod "831a3335-0187-40fa-bc8b-df4df6b51c4c" (UID: "831a3335-0187-40fa-bc8b-df4df6b51c4c"). InnerVolumeSpecName "kube-api-access-8smvn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.351168 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/831a3335-0187-40fa-bc8b-df4df6b51c4c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "831a3335-0187-40fa-bc8b-df4df6b51c4c" (UID: "831a3335-0187-40fa-bc8b-df4df6b51c4c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.426242 4893 generic.go:334] "Generic (PLEG): container finished" podID="831a3335-0187-40fa-bc8b-df4df6b51c4c" containerID="b37d9e3637b1953cfc4cc3b86ab0b337a28e357402e48e831cdd9dd76f741768" exitCode=0
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.426324 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b4c755c99-r5824" event={"ID":"831a3335-0187-40fa-bc8b-df4df6b51c4c","Type":"ContainerDied","Data":"b37d9e3637b1953cfc4cc3b86ab0b337a28e357402e48e831cdd9dd76f741768"}
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.426342 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b4c755c99-r5824"
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.426372 4893 scope.go:117] "RemoveContainer" containerID="b37d9e3637b1953cfc4cc3b86ab0b337a28e357402e48e831cdd9dd76f741768"
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.426358 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b4c755c99-r5824" event={"ID":"831a3335-0187-40fa-bc8b-df4df6b51c4c","Type":"ContainerDied","Data":"3854f801f986f3060e8a6a40252ebdfb5ef290a022efc884f6ee3d9db83af70f"}
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.428630 4893 generic.go:334] "Generic (PLEG): container finished" podID="906223d4-a28a-4317-bf9f-8513b8c8aa3c" containerID="5806b435dd7b9f94c2d0580d160c88b2aa4864fe2890173946b6c0247adf550a" exitCode=0
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.428703 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq"
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.428706 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq" event={"ID":"906223d4-a28a-4317-bf9f-8513b8c8aa3c","Type":"ContainerDied","Data":"5806b435dd7b9f94c2d0580d160c88b2aa4864fe2890173946b6c0247adf550a"}
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.429777 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq" event={"ID":"906223d4-a28a-4317-bf9f-8513b8c8aa3c","Type":"ContainerDied","Data":"176d2d3627bb751c07ffff9b1b3f9e2bd944fab143fe37032e661d80a3398dc2"}
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.448832 4893 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/831a3335-0187-40fa-bc8b-df4df6b51c4c-client-ca\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.448871 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8smvn\" (UniqueName: \"kubernetes.io/projected/831a3335-0187-40fa-bc8b-df4df6b51c4c-kube-api-access-8smvn\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.448885 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/831a3335-0187-40fa-bc8b-df4df6b51c4c-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.448894 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/831a3335-0187-40fa-bc8b-df4df6b51c4c-config\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.448904 4893 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/831a3335-0187-40fa-bc8b-df4df6b51c4c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.449196 4893 scope.go:117] "RemoveContainer" containerID="b37d9e3637b1953cfc4cc3b86ab0b337a28e357402e48e831cdd9dd76f741768"
Jan 28 15:07:27 crc kubenswrapper[4893]: E0128 15:07:27.450099 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b37d9e3637b1953cfc4cc3b86ab0b337a28e357402e48e831cdd9dd76f741768\":
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.450439 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b37d9e3637b1953cfc4cc3b86ab0b337a28e357402e48e831cdd9dd76f741768"} err="failed to get container status \"b37d9e3637b1953cfc4cc3b86ab0b337a28e357402e48e831cdd9dd76f741768\": rpc error: code = NotFound desc = could not find container \"b37d9e3637b1953cfc4cc3b86ab0b337a28e357402e48e831cdd9dd76f741768\": container with ID starting with b37d9e3637b1953cfc4cc3b86ab0b337a28e357402e48e831cdd9dd76f741768 not found: ID does not exist"
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.450500 4893 scope.go:117] "RemoveContainer" containerID="5806b435dd7b9f94c2d0580d160c88b2aa4864fe2890173946b6c0247adf550a"
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.473646 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b4c755c99-r5824"]
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.481136 4893 scope.go:117] "RemoveContainer" containerID="5806b435dd7b9f94c2d0580d160c88b2aa4864fe2890173946b6c0247adf550a"
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.481251 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-b4c755c99-r5824"]
Jan 28 15:07:27 crc kubenswrapper[4893]: E0128 15:07:27.481961 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5806b435dd7b9f94c2d0580d160c88b2aa4864fe2890173946b6c0247adf550a\": container with ID starting with 5806b435dd7b9f94c2d0580d160c88b2aa4864fe2890173946b6c0247adf550a not found: ID does not exist" containerID="5806b435dd7b9f94c2d0580d160c88b2aa4864fe2890173946b6c0247adf550a"
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.482004 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5806b435dd7b9f94c2d0580d160c88b2aa4864fe2890173946b6c0247adf550a"} err="failed to get container status \"5806b435dd7b9f94c2d0580d160c88b2aa4864fe2890173946b6c0247adf550a\": rpc error: code = NotFound desc = could not find container \"5806b435dd7b9f94c2d0580d160c88b2aa4864fe2890173946b6c0247adf550a\": container with ID starting with 5806b435dd7b9f94c2d0580d160c88b2aa4864fe2890173946b6c0247adf550a not found: ID does not exist"
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.496803 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq"]
Jan 28 15:07:27 crc kubenswrapper[4893]: I0128 15:07:27.501257 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f44897b5c-fmrzq"]
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.518937 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm"]
Jan 28 15:07:28 crc kubenswrapper[4893]: E0128 15:07:28.519675 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="906223d4-a28a-4317-bf9f-8513b8c8aa3c" containerName="route-controller-manager"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.519695 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="906223d4-a28a-4317-bf9f-8513b8c8aa3c" containerName="route-controller-manager"
Jan 28 15:07:28 crc kubenswrapper[4893]: E0128 15:07:28.519711 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="831a3335-0187-40fa-bc8b-df4df6b51c4c" containerName="controller-manager"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.519738 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="831a3335-0187-40fa-bc8b-df4df6b51c4c" containerName="controller-manager"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.519903 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="831a3335-0187-40fa-bc8b-df4df6b51c4c" containerName="controller-manager"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.519918 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="906223d4-a28a-4317-bf9f-8513b8c8aa3c" containerName="route-controller-manager"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.520675 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.523754 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t"]
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.524843 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.524916 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.524928 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.525877 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.525928 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.528057 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.529081 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.529123 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.529283 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.529324 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.529621 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.529769 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.529866 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.537948 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.546050 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm"]
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.552124 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t"]
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.563540 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/604cd931-84ed-4955-99a5-5a126c1f2950-config\") pod \"controller-manager-67fbdd65b9-mtq9t\" (UID: \"604cd931-84ed-4955-99a5-5a126c1f2950\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.563595 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/604cd931-84ed-4955-99a5-5a126c1f2950-proxy-ca-bundles\") pod \"controller-manager-67fbdd65b9-mtq9t\" (UID: \"604cd931-84ed-4955-99a5-5a126c1f2950\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.563669 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/604cd931-84ed-4955-99a5-5a126c1f2950-client-ca\") pod \"controller-manager-67fbdd65b9-mtq9t\" (UID: \"604cd931-84ed-4955-99a5-5a126c1f2950\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.563746 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qkhx\" (UniqueName: \"kubernetes.io/projected/604cd931-84ed-4955-99a5-5a126c1f2950-kube-api-access-5qkhx\") pod \"controller-manager-67fbdd65b9-mtq9t\" (UID: \"604cd931-84ed-4955-99a5-5a126c1f2950\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.563786 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a90d22b-0b24-42a5-81cd-6cd43d7fc822-config\") pod \"route-controller-manager-5f5d69cd77-4hlgm\" (UID: \"6a90d22b-0b24-42a5-81cd-6cd43d7fc822\") " pod="openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.563814 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a90d22b-0b24-42a5-81cd-6cd43d7fc822-client-ca\") pod \"route-controller-manager-5f5d69cd77-4hlgm\" (UID: \"6a90d22b-0b24-42a5-81cd-6cd43d7fc822\") " pod="openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.563880 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a90d22b-0b24-42a5-81cd-6cd43d7fc822-serving-cert\") pod \"route-controller-manager-5f5d69cd77-4hlgm\" (UID: \"6a90d22b-0b24-42a5-81cd-6cd43d7fc822\") " pod="openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.563955 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/604cd931-84ed-4955-99a5-5a126c1f2950-serving-cert\") pod \"controller-manager-67fbdd65b9-mtq9t\" (UID: \"604cd931-84ed-4955-99a5-5a126c1f2950\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.564008 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5wlv\" (UniqueName: \"kubernetes.io/projected/6a90d22b-0b24-42a5-81cd-6cd43d7fc822-kube-api-access-r5wlv\") pod \"route-controller-manager-5f5d69cd77-4hlgm\" (UID: \"6a90d22b-0b24-42a5-81cd-6cd43d7fc822\") " pod="openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.666019 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/604cd931-84ed-4955-99a5-5a126c1f2950-client-ca\") pod \"controller-manager-67fbdd65b9-mtq9t\" (UID: \"604cd931-84ed-4955-99a5-5a126c1f2950\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.666118 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qkhx\" (UniqueName: \"kubernetes.io/projected/604cd931-84ed-4955-99a5-5a126c1f2950-kube-api-access-5qkhx\") pod \"controller-manager-67fbdd65b9-mtq9t\" (UID: \"604cd931-84ed-4955-99a5-5a126c1f2950\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.666159 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a90d22b-0b24-42a5-81cd-6cd43d7fc822-config\") pod \"route-controller-manager-5f5d69cd77-4hlgm\" (UID: \"6a90d22b-0b24-42a5-81cd-6cd43d7fc822\") " pod="openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.666192 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a90d22b-0b24-42a5-81cd-6cd43d7fc822-client-ca\") pod \"route-controller-manager-5f5d69cd77-4hlgm\" (UID: \"6a90d22b-0b24-42a5-81cd-6cd43d7fc822\") " pod="openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.666225 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a90d22b-0b24-42a5-81cd-6cd43d7fc822-serving-cert\") pod \"route-controller-manager-5f5d69cd77-4hlgm\" (UID: \"6a90d22b-0b24-42a5-81cd-6cd43d7fc822\") " pod="openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.666259 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/604cd931-84ed-4955-99a5-5a126c1f2950-serving-cert\") pod \"controller-manager-67fbdd65b9-mtq9t\" (UID: \"604cd931-84ed-4955-99a5-5a126c1f2950\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.666283 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5wlv\" (UniqueName: \"kubernetes.io/projected/6a90d22b-0b24-42a5-81cd-6cd43d7fc822-kube-api-access-r5wlv\") pod \"route-controller-manager-5f5d69cd77-4hlgm\" (UID: \"6a90d22b-0b24-42a5-81cd-6cd43d7fc822\") " pod="openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.666321 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/604cd931-84ed-4955-99a5-5a126c1f2950-config\") pod \"controller-manager-67fbdd65b9-mtq9t\" (UID: \"604cd931-84ed-4955-99a5-5a126c1f2950\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.666353 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/604cd931-84ed-4955-99a5-5a126c1f2950-proxy-ca-bundles\") pod \"controller-manager-67fbdd65b9-mtq9t\" (UID: \"604cd931-84ed-4955-99a5-5a126c1f2950\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.667715 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a90d22b-0b24-42a5-81cd-6cd43d7fc822-client-ca\") pod \"route-controller-manager-5f5d69cd77-4hlgm\" (UID: \"6a90d22b-0b24-42a5-81cd-6cd43d7fc822\") " pod="openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.667798 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a90d22b-0b24-42a5-81cd-6cd43d7fc822-config\") pod \"route-controller-manager-5f5d69cd77-4hlgm\" (UID: \"6a90d22b-0b24-42a5-81cd-6cd43d7fc822\") " pod="openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.668287 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/604cd931-84ed-4955-99a5-5a126c1f2950-config\") pod \"controller-manager-67fbdd65b9-mtq9t\" (UID: \"604cd931-84ed-4955-99a5-5a126c1f2950\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.668318 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/604cd931-84ed-4955-99a5-5a126c1f2950-client-ca\") pod \"controller-manager-67fbdd65b9-mtq9t\" (UID: \"604cd931-84ed-4955-99a5-5a126c1f2950\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.668307 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/604cd931-84ed-4955-99a5-5a126c1f2950-proxy-ca-bundles\") pod \"controller-manager-67fbdd65b9-mtq9t\" (UID: \"604cd931-84ed-4955-99a5-5a126c1f2950\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.671672 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a90d22b-0b24-42a5-81cd-6cd43d7fc822-serving-cert\") pod \"route-controller-manager-5f5d69cd77-4hlgm\" (UID: \"6a90d22b-0b24-42a5-81cd-6cd43d7fc822\") " pod="openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.679061 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/604cd931-84ed-4955-99a5-5a126c1f2950-serving-cert\") pod \"controller-manager-67fbdd65b9-mtq9t\" (UID: \"604cd931-84ed-4955-99a5-5a126c1f2950\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.684696 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5wlv\" (UniqueName: \"kubernetes.io/projected/6a90d22b-0b24-42a5-81cd-6cd43d7fc822-kube-api-access-r5wlv\") pod \"route-controller-manager-5f5d69cd77-4hlgm\" (UID: \"6a90d22b-0b24-42a5-81cd-6cd43d7fc822\") " pod="openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.689757 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qkhx\" (UniqueName: \"kubernetes.io/projected/604cd931-84ed-4955-99a5-5a126c1f2950-kube-api-access-5qkhx\") pod \"controller-manager-67fbdd65b9-mtq9t\" (UID: \"604cd931-84ed-4955-99a5-5a126c1f2950\") " pod="openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.849735 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.866165 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.900128 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="831a3335-0187-40fa-bc8b-df4df6b51c4c" path="/var/lib/kubelet/pods/831a3335-0187-40fa-bc8b-df4df6b51c4c/volumes"
Jan 28 15:07:28 crc kubenswrapper[4893]: I0128 15:07:28.901060 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="906223d4-a28a-4317-bf9f-8513b8c8aa3c" path="/var/lib/kubelet/pods/906223d4-a28a-4317-bf9f-8513b8c8aa3c/volumes"
Jan 28 15:07:29 crc kubenswrapper[4893]: I0128 15:07:29.146649 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t"]
Jan 28 15:07:29 crc kubenswrapper[4893]: I0128 15:07:29.283887 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm"]
Jan 28 15:07:29 crc kubenswrapper[4893]: W0128 15:07:29.288840 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a90d22b_0b24_42a5_81cd_6cd43d7fc822.slice/crio-0f5fa98b69714487e7d174b45738c44f2a4b6a051da64391a81766b0682010c3 WatchSource:0}: Error finding container 0f5fa98b69714487e7d174b45738c44f2a4b6a051da64391a81766b0682010c3: Status 404 returned error can't find the container with id 0f5fa98b69714487e7d174b45738c44f2a4b6a051da64391a81766b0682010c3
Jan 28 15:07:29 crc kubenswrapper[4893]: I0128 15:07:29.452962 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t" event={"ID":"604cd931-84ed-4955-99a5-5a126c1f2950","Type":"ContainerStarted","Data":"1e4767ebee22777de27138ca60b9eae40564baf1e72c94e0af43e00fb3dc7c2c"}
Jan 28 15:07:29 crc kubenswrapper[4893]: I0128 15:07:29.453016 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t" event={"ID":"604cd931-84ed-4955-99a5-5a126c1f2950","Type":"ContainerStarted","Data":"c7ca7635ce6cf8f2ab87c079db38964944aa70653e4baed2a88ed1d60d652d8a"}
Jan 28 15:07:29 crc kubenswrapper[4893]: I0128 15:07:29.453440 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t"
Jan 28 15:07:29 crc kubenswrapper[4893]: I0128 15:07:29.458276 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm" event={"ID":"6a90d22b-0b24-42a5-81cd-6cd43d7fc822","Type":"ContainerStarted","Data":"032a5ae479a1fb953942cf6091edd2e0892e10d06f8e0021e764d48b19595358"}
Jan 28 15:07:29 crc kubenswrapper[4893]: I0128 15:07:29.458325 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm" event={"ID":"6a90d22b-0b24-42a5-81cd-6cd43d7fc822","Type":"ContainerStarted","Data":"0f5fa98b69714487e7d174b45738c44f2a4b6a051da64391a81766b0682010c3"}
Jan 28 15:07:29 crc kubenswrapper[4893]: I0128 15:07:29.458585 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm"
Jan 28 15:07:29 crc kubenswrapper[4893]: I0128 15:07:29.459057 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t"
Jan 28 15:07:29 crc kubenswrapper[4893]: I0128 15:07:29.459771 4893 patch_prober.go:28] interesting pod/route-controller-manager-5f5d69cd77-4hlgm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body=
Jan 28 15:07:29 crc kubenswrapper[4893]: I0128 15:07:29.459811 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm" podUID="6a90d22b-0b24-42a5-81cd-6cd43d7fc822" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused"
Jan 28 15:07:29 crc kubenswrapper[4893]: I0128 15:07:29.472352 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t" podStartSLOduration=3.472320071 podStartE2EDuration="3.472320071s" podCreationTimestamp="2026-01-28 15:07:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:07:29.469575813 +0000 UTC m=+367.243190851" watchObservedRunningTime="2026-01-28 15:07:29.472320071 +0000 UTC m=+367.245935099"
Jan 28 15:07:30 crc kubenswrapper[4893]: I0128 15:07:30.477711 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm"
Jan 28 15:07:30 crc kubenswrapper[4893]: I0128 15:07:30.496724 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm" podStartSLOduration=4.496703544 podStartE2EDuration="4.496703544s" podCreationTimestamp="2026-01-28 15:07:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:07:29.508952582 +0000 UTC m=+367.282567620" watchObservedRunningTime="2026-01-28 15:07:30.496703544 +0000 UTC m=+368.270318572"
Jan 28 15:07:35 crc kubenswrapper[4893]: I0128 15:07:35.723154 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 15:07:35 crc kubenswrapper[4893]: I0128 15:07:35.725728 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 15:07:40 crc kubenswrapper[4893]: I0128 15:07:40.031332 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s2wp6"]
Jan 28 15:07:40 crc kubenswrapper[4893]: I0128 15:07:40.031873 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s2wp6" podUID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" containerName="registry-server" containerID="cri-o://359accb6acde26f42c238b20b166e78083674f8c2a6955d24a10d74da3342acf" gracePeriod=2
Jan 28 15:07:40 crc kubenswrapper[4893]: I0128 15:07:40.227967 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-77mgk"]
Jan 28 15:07:40 crc kubenswrapper[4893]: I0128 15:07:40.527340 4893 generic.go:334] "Generic (PLEG): container finished" podID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" containerID="359accb6acde26f42c238b20b166e78083674f8c2a6955d24a10d74da3342acf" exitCode=0
Jan 28 15:07:40 crc kubenswrapper[4893]: I0128 15:07:40.527428 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2wp6" event={"ID":"c1d61ecd-2c35-4e84-85db-9ebe350850a6","Type":"ContainerDied","Data":"359accb6acde26f42c238b20b166e78083674f8c2a6955d24a10d74da3342acf"}
Jan 28 15:07:40 crc kubenswrapper[4893]: I0128 15:07:40.527625 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-77mgk" podUID="f9efa33f-313e-484f-967c-1d829b6f8250" containerName="registry-server" containerID="cri-o://afafc5badef04b51797b6a729a96eb155267dd750e94cd4abf3ecbd5f14568ac" gracePeriod=2
Jan 28 15:07:40 crc kubenswrapper[4893]: I0128 15:07:40.680157 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s2wp6"
Jan 28 15:07:40 crc kubenswrapper[4893]: I0128 15:07:40.749439 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1d61ecd-2c35-4e84-85db-9ebe350850a6-utilities\") pod \"c1d61ecd-2c35-4e84-85db-9ebe350850a6\" (UID: \"c1d61ecd-2c35-4e84-85db-9ebe350850a6\") "
Jan 28 15:07:40 crc kubenswrapper[4893]: I0128 15:07:40.749645 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjqm8\" (UniqueName: \"kubernetes.io/projected/c1d61ecd-2c35-4e84-85db-9ebe350850a6-kube-api-access-fjqm8\") pod \"c1d61ecd-2c35-4e84-85db-9ebe350850a6\" (UID: \"c1d61ecd-2c35-4e84-85db-9ebe350850a6\") "
Jan 28 15:07:40 crc kubenswrapper[4893]: I0128 15:07:40.749781 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1d61ecd-2c35-4e84-85db-9ebe350850a6-catalog-content\") pod \"c1d61ecd-2c35-4e84-85db-9ebe350850a6\" (UID: \"c1d61ecd-2c35-4e84-85db-9ebe350850a6\") "
Jan 28 15:07:40 crc kubenswrapper[4893]: I0128 15:07:40.750411 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1d61ecd-2c35-4e84-85db-9ebe350850a6-utilities" (OuterVolumeSpecName: "utilities") pod "c1d61ecd-2c35-4e84-85db-9ebe350850a6" (UID: "c1d61ecd-2c35-4e84-85db-9ebe350850a6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:07:40 crc kubenswrapper[4893]: I0128 15:07:40.754679 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1d61ecd-2c35-4e84-85db-9ebe350850a6-kube-api-access-fjqm8" (OuterVolumeSpecName: "kube-api-access-fjqm8") pod "c1d61ecd-2c35-4e84-85db-9ebe350850a6" (UID: "c1d61ecd-2c35-4e84-85db-9ebe350850a6"). InnerVolumeSpecName "kube-api-access-fjqm8". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:07:40 crc kubenswrapper[4893]: I0128 15:07:40.826364 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1d61ecd-2c35-4e84-85db-9ebe350850a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1d61ecd-2c35-4e84-85db-9ebe350850a6" (UID: "c1d61ecd-2c35-4e84-85db-9ebe350850a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:07:40 crc kubenswrapper[4893]: I0128 15:07:40.852375 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1d61ecd-2c35-4e84-85db-9ebe350850a6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:07:40 crc kubenswrapper[4893]: I0128 15:07:40.852431 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1d61ecd-2c35-4e84-85db-9ebe350850a6-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:07:40 crc kubenswrapper[4893]: I0128 15:07:40.852445 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjqm8\" (UniqueName: \"kubernetes.io/projected/c1d61ecd-2c35-4e84-85db-9ebe350850a6-kube-api-access-fjqm8\") on node \"crc\" DevicePath \"\"" Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.100514 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-77mgk" Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.257269 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9efa33f-313e-484f-967c-1d829b6f8250-utilities\") pod \"f9efa33f-313e-484f-967c-1d829b6f8250\" (UID: \"f9efa33f-313e-484f-967c-1d829b6f8250\") " Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.257450 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shckb\" (UniqueName: \"kubernetes.io/projected/f9efa33f-313e-484f-967c-1d829b6f8250-kube-api-access-shckb\") pod \"f9efa33f-313e-484f-967c-1d829b6f8250\" (UID: \"f9efa33f-313e-484f-967c-1d829b6f8250\") " Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.257520 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9efa33f-313e-484f-967c-1d829b6f8250-catalog-content\") pod \"f9efa33f-313e-484f-967c-1d829b6f8250\" (UID: \"f9efa33f-313e-484f-967c-1d829b6f8250\") " Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.258730 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9efa33f-313e-484f-967c-1d829b6f8250-utilities" (OuterVolumeSpecName: "utilities") pod "f9efa33f-313e-484f-967c-1d829b6f8250" (UID: "f9efa33f-313e-484f-967c-1d829b6f8250"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.263312 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9efa33f-313e-484f-967c-1d829b6f8250-kube-api-access-shckb" (OuterVolumeSpecName: "kube-api-access-shckb") pod "f9efa33f-313e-484f-967c-1d829b6f8250" (UID: "f9efa33f-313e-484f-967c-1d829b6f8250"). InnerVolumeSpecName "kube-api-access-shckb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.304177 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9efa33f-313e-484f-967c-1d829b6f8250-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9efa33f-313e-484f-967c-1d829b6f8250" (UID: "f9efa33f-313e-484f-967c-1d829b6f8250"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.359716 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shckb\" (UniqueName: \"kubernetes.io/projected/f9efa33f-313e-484f-967c-1d829b6f8250-kube-api-access-shckb\") on node \"crc\" DevicePath \"\"" Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.359754 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9efa33f-313e-484f-967c-1d829b6f8250-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.359765 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9efa33f-313e-484f-967c-1d829b6f8250-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.535409 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s2wp6" event={"ID":"c1d61ecd-2c35-4e84-85db-9ebe350850a6","Type":"ContainerDied","Data":"dce22a04cc5113a4aaa6557eb3f041d22128a9098460fad94fb9791307740f92"} Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.535522 4893 scope.go:117] "RemoveContainer" containerID="359accb6acde26f42c238b20b166e78083674f8c2a6955d24a10d74da3342acf" Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.535427 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s2wp6" Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.537650 4893 generic.go:334] "Generic (PLEG): container finished" podID="f9efa33f-313e-484f-967c-1d829b6f8250" containerID="afafc5badef04b51797b6a729a96eb155267dd750e94cd4abf3ecbd5f14568ac" exitCode=0 Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.537700 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77mgk" event={"ID":"f9efa33f-313e-484f-967c-1d829b6f8250","Type":"ContainerDied","Data":"afafc5badef04b51797b6a729a96eb155267dd750e94cd4abf3ecbd5f14568ac"} Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.537729 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-77mgk" event={"ID":"f9efa33f-313e-484f-967c-1d829b6f8250","Type":"ContainerDied","Data":"135816f32d58633a9545c028da79b2b096b4458795d53b6ccb6080f4ba4d2db6"} Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.537796 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-77mgk" Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.550271 4893 scope.go:117] "RemoveContainer" containerID="0572a0380a6ec2de2ae53477eec9f2f41a3b6ad599c48e6c96604e120c17685a" Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.571084 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s2wp6"] Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.575203 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s2wp6"] Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.584651 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-77mgk"] Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.588977 4893 scope.go:117] "RemoveContainer" containerID="17e2f7a97d4ce620fadc3a513acd44774acbc6c71ae39715aec815803a69046d" Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.590087 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-77mgk"] Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.605681 4893 scope.go:117] "RemoveContainer" containerID="afafc5badef04b51797b6a729a96eb155267dd750e94cd4abf3ecbd5f14568ac" Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.624150 4893 scope.go:117] "RemoveContainer" containerID="72abe041597c6df0b2391a6e053b221bdb6eac1404fcf6a17287f24e73f3a86e" Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.641410 4893 scope.go:117] "RemoveContainer" containerID="f4be2056952fd7894303c984968317ba819ffc48c1263fb5e8d78d024dcf4a79" Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.670634 4893 scope.go:117] "RemoveContainer" containerID="afafc5badef04b51797b6a729a96eb155267dd750e94cd4abf3ecbd5f14568ac" Jan 28 15:07:41 crc kubenswrapper[4893]: E0128 15:07:41.671406 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afafc5badef04b51797b6a729a96eb155267dd750e94cd4abf3ecbd5f14568ac\": container with ID starting with afafc5badef04b51797b6a729a96eb155267dd750e94cd4abf3ecbd5f14568ac not found: ID does not exist" containerID="afafc5badef04b51797b6a729a96eb155267dd750e94cd4abf3ecbd5f14568ac" Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.671486 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afafc5badef04b51797b6a729a96eb155267dd750e94cd4abf3ecbd5f14568ac"} err="failed to get container status \"afafc5badef04b51797b6a729a96eb155267dd750e94cd4abf3ecbd5f14568ac\": rpc error: code = NotFound desc = could not find container \"afafc5badef04b51797b6a729a96eb155267dd750e94cd4abf3ecbd5f14568ac\": container with ID starting with afafc5badef04b51797b6a729a96eb155267dd750e94cd4abf3ecbd5f14568ac not found: ID does not exist" Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.671537 4893 scope.go:117] "RemoveContainer" containerID="72abe041597c6df0b2391a6e053b221bdb6eac1404fcf6a17287f24e73f3a86e" Jan 28 15:07:41 crc kubenswrapper[4893]: E0128 15:07:41.672143 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72abe041597c6df0b2391a6e053b221bdb6eac1404fcf6a17287f24e73f3a86e\": container with ID starting with 72abe041597c6df0b2391a6e053b221bdb6eac1404fcf6a17287f24e73f3a86e not found: ID does not exist" containerID="72abe041597c6df0b2391a6e053b221bdb6eac1404fcf6a17287f24e73f3a86e" Jan 28 
Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.672193 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72abe041597c6df0b2391a6e053b221bdb6eac1404fcf6a17287f24e73f3a86e"} err="failed to get container status \"72abe041597c6df0b2391a6e053b221bdb6eac1404fcf6a17287f24e73f3a86e\": rpc error: code = NotFound desc = could not find container \"72abe041597c6df0b2391a6e053b221bdb6eac1404fcf6a17287f24e73f3a86e\": container with ID starting with 72abe041597c6df0b2391a6e053b221bdb6eac1404fcf6a17287f24e73f3a86e not found: ID does not exist"
Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.672228 4893 scope.go:117] "RemoveContainer" containerID="f4be2056952fd7894303c984968317ba819ffc48c1263fb5e8d78d024dcf4a79"
Jan 28 15:07:41 crc kubenswrapper[4893]: E0128 15:07:41.672745 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4be2056952fd7894303c984968317ba819ffc48c1263fb5e8d78d024dcf4a79\": container with ID starting with f4be2056952fd7894303c984968317ba819ffc48c1263fb5e8d78d024dcf4a79 not found: ID does not exist" containerID="f4be2056952fd7894303c984968317ba819ffc48c1263fb5e8d78d024dcf4a79"
Jan 28 15:07:41 crc kubenswrapper[4893]: I0128 15:07:41.672798 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4be2056952fd7894303c984968317ba819ffc48c1263fb5e8d78d024dcf4a79"} err="failed to get container status \"f4be2056952fd7894303c984968317ba819ffc48c1263fb5e8d78d024dcf4a79\": rpc error: code = NotFound desc = could not find container \"f4be2056952fd7894303c984968317ba819ffc48c1263fb5e8d78d024dcf4a79\": container with ID starting with f4be2056952fd7894303c984968317ba819ffc48c1263fb5e8d78d024dcf4a79 not found: ID does not exist"
Jan 28 15:07:42 crc kubenswrapper[4893]: I0128 15:07:42.628292 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mtslh"]
Jan 28 15:07:42 crc kubenswrapper[4893]: I0128 15:07:42.628568 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mtslh" podUID="0c2ed13a-5aec-42ad-80a0-1ee315e4fb12" containerName="registry-server" containerID="cri-o://716d2f54a411be1789b1a32a0d2d9c3de0cdb66d92d591702ce45b778af55a6c" gracePeriod=2
Jan 28 15:07:42 crc kubenswrapper[4893]: I0128 15:07:42.899817 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" path="/var/lib/kubelet/pods/c1d61ecd-2c35-4e84-85db-9ebe350850a6/volumes"
Jan 28 15:07:42 crc kubenswrapper[4893]: I0128 15:07:42.901022 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9efa33f-313e-484f-967c-1d829b6f8250" path="/var/lib/kubelet/pods/f9efa33f-313e-484f-967c-1d829b6f8250/volumes"
Jan 28 15:07:43 crc kubenswrapper[4893]: I0128 15:07:43.553618 4893 generic.go:334] "Generic (PLEG): container finished" podID="0c2ed13a-5aec-42ad-80a0-1ee315e4fb12" containerID="716d2f54a411be1789b1a32a0d2d9c3de0cdb66d92d591702ce45b778af55a6c" exitCode=0
Jan 28 15:07:43 crc kubenswrapper[4893]: I0128 15:07:43.553877 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtslh" event={"ID":"0c2ed13a-5aec-42ad-80a0-1ee315e4fb12","Type":"ContainerDied","Data":"716d2f54a411be1789b1a32a0d2d9c3de0cdb66d92d591702ce45b778af55a6c"}
Jan 28 15:07:43 crc kubenswrapper[4893]: I0128 15:07:43.780596 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mtslh"
Jan 28 15:07:43 crc kubenswrapper[4893]: I0128 15:07:43.899859 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c2ed13a-5aec-42ad-80a0-1ee315e4fb12-utilities\") pod \"0c2ed13a-5aec-42ad-80a0-1ee315e4fb12\" (UID: \"0c2ed13a-5aec-42ad-80a0-1ee315e4fb12\") "
Jan 28 15:07:43 crc kubenswrapper[4893]: I0128 15:07:43.899931 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c2ed13a-5aec-42ad-80a0-1ee315e4fb12-catalog-content\") pod \"0c2ed13a-5aec-42ad-80a0-1ee315e4fb12\" (UID: \"0c2ed13a-5aec-42ad-80a0-1ee315e4fb12\") "
Jan 28 15:07:43 crc kubenswrapper[4893]: I0128 15:07:43.900030 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll2rd\" (UniqueName: \"kubernetes.io/projected/0c2ed13a-5aec-42ad-80a0-1ee315e4fb12-kube-api-access-ll2rd\") pod \"0c2ed13a-5aec-42ad-80a0-1ee315e4fb12\" (UID: \"0c2ed13a-5aec-42ad-80a0-1ee315e4fb12\") "
Jan 28 15:07:43 crc kubenswrapper[4893]: I0128 15:07:43.901006 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c2ed13a-5aec-42ad-80a0-1ee315e4fb12-utilities" (OuterVolumeSpecName: "utilities") pod "0c2ed13a-5aec-42ad-80a0-1ee315e4fb12" (UID: "0c2ed13a-5aec-42ad-80a0-1ee315e4fb12"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:07:43 crc kubenswrapper[4893]: I0128 15:07:43.906651 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c2ed13a-5aec-42ad-80a0-1ee315e4fb12-kube-api-access-ll2rd" (OuterVolumeSpecName: "kube-api-access-ll2rd") pod "0c2ed13a-5aec-42ad-80a0-1ee315e4fb12" (UID: "0c2ed13a-5aec-42ad-80a0-1ee315e4fb12"). InnerVolumeSpecName "kube-api-access-ll2rd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:07:44 crc kubenswrapper[4893]: I0128 15:07:44.008336 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c2ed13a-5aec-42ad-80a0-1ee315e4fb12-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:44 crc kubenswrapper[4893]: I0128 15:07:44.008890 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ll2rd\" (UniqueName: \"kubernetes.io/projected/0c2ed13a-5aec-42ad-80a0-1ee315e4fb12-kube-api-access-ll2rd\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:44 crc kubenswrapper[4893]: I0128 15:07:44.027449 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c2ed13a-5aec-42ad-80a0-1ee315e4fb12-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c2ed13a-5aec-42ad-80a0-1ee315e4fb12" (UID: "0c2ed13a-5aec-42ad-80a0-1ee315e4fb12"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:07:44 crc kubenswrapper[4893]: I0128 15:07:44.110738 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c2ed13a-5aec-42ad-80a0-1ee315e4fb12-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:44 crc kubenswrapper[4893]: I0128 15:07:44.561999 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtslh" event={"ID":"0c2ed13a-5aec-42ad-80a0-1ee315e4fb12","Type":"ContainerDied","Data":"fc599c836f27b10ed49b9c866cb202f62b879aafe766ee6004b6a7667c2425ed"}
Jan 28 15:07:44 crc kubenswrapper[4893]: I0128 15:07:44.562075 4893 scope.go:117] "RemoveContainer" containerID="716d2f54a411be1789b1a32a0d2d9c3de0cdb66d92d591702ce45b778af55a6c"
Jan 28 15:07:44 crc kubenswrapper[4893]: I0128 15:07:44.562110 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mtslh"
Jan 28 15:07:44 crc kubenswrapper[4893]: I0128 15:07:44.595028 4893 scope.go:117] "RemoveContainer" containerID="616e13dce44cad0ae55ffbed0a8f8195bb6f01d1875a5a34ea3ae09453c2331b"
Jan 28 15:07:44 crc kubenswrapper[4893]: I0128 15:07:44.598839 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mtslh"]
Jan 28 15:07:44 crc kubenswrapper[4893]: I0128 15:07:44.601530 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mtslh"]
Jan 28 15:07:44 crc kubenswrapper[4893]: I0128 15:07:44.616119 4893 scope.go:117] "RemoveContainer" containerID="479569e987536e89a1faf6d1fb3540b92e412f8dbd1efab3f6777b205c48f9ad"
Jan 28 15:07:44 crc kubenswrapper[4893]: I0128 15:07:44.899106 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c2ed13a-5aec-42ad-80a0-1ee315e4fb12" path="/var/lib/kubelet/pods/0c2ed13a-5aec-42ad-80a0-1ee315e4fb12/volumes"
Jan 28 15:07:46 crc kubenswrapper[4893]: I0128 15:07:46.598773 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm"]
Jan 28 15:07:46 crc kubenswrapper[4893]: I0128 15:07:46.600123 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm" podUID="6a90d22b-0b24-42a5-81cd-6cd43d7fc822" containerName="route-controller-manager" containerID="cri-o://032a5ae479a1fb953942cf6091edd2e0892e10d06f8e0021e764d48b19595358" gracePeriod=30
Jan 28 15:07:47 crc kubenswrapper[4893]: I0128 15:07:47.572675 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm"
Jan 28 15:07:47 crc kubenswrapper[4893]: I0128 15:07:47.580893 4893 generic.go:334] "Generic (PLEG): container finished" podID="6a90d22b-0b24-42a5-81cd-6cd43d7fc822" containerID="032a5ae479a1fb953942cf6091edd2e0892e10d06f8e0021e764d48b19595358" exitCode=0
Jan 28 15:07:47 crc kubenswrapper[4893]: I0128 15:07:47.580931 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm" event={"ID":"6a90d22b-0b24-42a5-81cd-6cd43d7fc822","Type":"ContainerDied","Data":"032a5ae479a1fb953942cf6091edd2e0892e10d06f8e0021e764d48b19595358"}
Jan 28 15:07:47 crc kubenswrapper[4893]: I0128 15:07:47.580960 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm" event={"ID":"6a90d22b-0b24-42a5-81cd-6cd43d7fc822","Type":"ContainerDied","Data":"0f5fa98b69714487e7d174b45738c44f2a4b6a051da64391a81766b0682010c3"}
Jan 28 15:07:47 crc kubenswrapper[4893]: I0128 15:07:47.580976 4893 scope.go:117] "RemoveContainer" containerID="032a5ae479a1fb953942cf6091edd2e0892e10d06f8e0021e764d48b19595358"
Jan 28 15:07:47 crc kubenswrapper[4893]: I0128 15:07:47.581034 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm"
Jan 28 15:07:47 crc kubenswrapper[4893]: I0128 15:07:47.596926 4893 scope.go:117] "RemoveContainer" containerID="032a5ae479a1fb953942cf6091edd2e0892e10d06f8e0021e764d48b19595358"
Jan 28 15:07:47 crc kubenswrapper[4893]: E0128 15:07:47.597525 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"032a5ae479a1fb953942cf6091edd2e0892e10d06f8e0021e764d48b19595358\": container with ID starting with 032a5ae479a1fb953942cf6091edd2e0892e10d06f8e0021e764d48b19595358 not found: ID does not exist" containerID="032a5ae479a1fb953942cf6091edd2e0892e10d06f8e0021e764d48b19595358"
Jan 28 15:07:47 crc kubenswrapper[4893]: I0128 15:07:47.597566 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"032a5ae479a1fb953942cf6091edd2e0892e10d06f8e0021e764d48b19595358"} err="failed to get container status \"032a5ae479a1fb953942cf6091edd2e0892e10d06f8e0021e764d48b19595358\": rpc error: code = NotFound desc = could not find container \"032a5ae479a1fb953942cf6091edd2e0892e10d06f8e0021e764d48b19595358\": container with ID starting with 032a5ae479a1fb953942cf6091edd2e0892e10d06f8e0021e764d48b19595358 not found: ID does not exist"
Jan 28 15:07:47 crc kubenswrapper[4893]: I0128 15:07:47.761137 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a90d22b-0b24-42a5-81cd-6cd43d7fc822-client-ca\") pod \"6a90d22b-0b24-42a5-81cd-6cd43d7fc822\" (UID: \"6a90d22b-0b24-42a5-81cd-6cd43d7fc822\") "
Jan 28 15:07:47 crc kubenswrapper[4893]: I0128 15:07:47.761192 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a90d22b-0b24-42a5-81cd-6cd43d7fc822-config\") pod \"6a90d22b-0b24-42a5-81cd-6cd43d7fc822\" (UID: \"6a90d22b-0b24-42a5-81cd-6cd43d7fc822\") "
Jan 28 15:07:47 crc kubenswrapper[4893]: I0128 15:07:47.761221 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5wlv\" (UniqueName: \"kubernetes.io/projected/6a90d22b-0b24-42a5-81cd-6cd43d7fc822-kube-api-access-r5wlv\") pod \"6a90d22b-0b24-42a5-81cd-6cd43d7fc822\" (UID: \"6a90d22b-0b24-42a5-81cd-6cd43d7fc822\") "
Jan 28 15:07:47 crc kubenswrapper[4893]: I0128 15:07:47.761357 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a90d22b-0b24-42a5-81cd-6cd43d7fc822-serving-cert\") pod \"6a90d22b-0b24-42a5-81cd-6cd43d7fc822\" (UID: \"6a90d22b-0b24-42a5-81cd-6cd43d7fc822\") "
Jan 28 15:07:47 crc kubenswrapper[4893]: I0128 15:07:47.762388 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a90d22b-0b24-42a5-81cd-6cd43d7fc822-client-ca" (OuterVolumeSpecName: "client-ca") pod "6a90d22b-0b24-42a5-81cd-6cd43d7fc822" (UID: "6a90d22b-0b24-42a5-81cd-6cd43d7fc822"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:07:47 crc kubenswrapper[4893]: I0128 15:07:47.762409 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a90d22b-0b24-42a5-81cd-6cd43d7fc822-config" (OuterVolumeSpecName: "config") pod "6a90d22b-0b24-42a5-81cd-6cd43d7fc822" (UID: "6a90d22b-0b24-42a5-81cd-6cd43d7fc822"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:07:47 crc kubenswrapper[4893]: I0128 15:07:47.762961 4893 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a90d22b-0b24-42a5-81cd-6cd43d7fc822-client-ca\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:47 crc kubenswrapper[4893]: I0128 15:07:47.762987 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a90d22b-0b24-42a5-81cd-6cd43d7fc822-config\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:47 crc kubenswrapper[4893]: I0128 15:07:47.768670 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a90d22b-0b24-42a5-81cd-6cd43d7fc822-kube-api-access-r5wlv" (OuterVolumeSpecName: "kube-api-access-r5wlv") pod "6a90d22b-0b24-42a5-81cd-6cd43d7fc822" (UID: "6a90d22b-0b24-42a5-81cd-6cd43d7fc822"). InnerVolumeSpecName "kube-api-access-r5wlv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:07:47 crc kubenswrapper[4893]: I0128 15:07:47.773447 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a90d22b-0b24-42a5-81cd-6cd43d7fc822-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6a90d22b-0b24-42a5-81cd-6cd43d7fc822" (UID: "6a90d22b-0b24-42a5-81cd-6cd43d7fc822"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:07:47 crc kubenswrapper[4893]: I0128 15:07:47.863822 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5wlv\" (UniqueName: \"kubernetes.io/projected/6a90d22b-0b24-42a5-81cd-6cd43d7fc822-kube-api-access-r5wlv\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:47 crc kubenswrapper[4893]: I0128 15:07:47.863862 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a90d22b-0b24-42a5-81cd-6cd43d7fc822-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:47 crc kubenswrapper[4893]: I0128 15:07:47.915609 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm"]
Jan 28 15:07:47 crc kubenswrapper[4893]: I0128 15:07:47.918519 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f5d69cd77-4hlgm"]
Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.538670 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f44897b5c-glfdz"]
Jan 28 15:07:48 crc kubenswrapper[4893]: E0128 15:07:48.539401 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" containerName="extract-utilities"
Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.539417 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" containerName="extract-utilities"
Jan 28 15:07:48 crc kubenswrapper[4893]: E0128 15:07:48.539428 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c2ed13a-5aec-42ad-80a0-1ee315e4fb12" containerName="extract-utilities"
Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.539434 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c2ed13a-5aec-42ad-80a0-1ee315e4fb12" containerName="extract-utilities"
Jan 28 15:07:48 crc kubenswrapper[4893]: E0128 15:07:48.539447 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c2ed13a-5aec-42ad-80a0-1ee315e4fb12" containerName="extract-content"
Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.539455 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c2ed13a-5aec-42ad-80a0-1ee315e4fb12" containerName="extract-content"
Jan 28 15:07:48 crc kubenswrapper[4893]: E0128 15:07:48.539463 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a90d22b-0b24-42a5-81cd-6cd43d7fc822" containerName="route-controller-manager"
Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.539489 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a90d22b-0b24-42a5-81cd-6cd43d7fc822" containerName="route-controller-manager"
Jan 28 15:07:48 crc kubenswrapper[4893]: E0128 15:07:48.539504 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9efa33f-313e-484f-967c-1d829b6f8250" containerName="registry-server"
Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.539510 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9efa33f-313e-484f-967c-1d829b6f8250" containerName="registry-server"
Jan 28 15:07:48 crc kubenswrapper[4893]: E0128 15:07:48.539520 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" containerName="registry-server"
Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.539527 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" containerName="registry-server"
podUID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" containerName="registry-server" Jan 28 15:07:48 crc kubenswrapper[4893]: E0128 15:07:48.539534 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c2ed13a-5aec-42ad-80a0-1ee315e4fb12" containerName="registry-server" Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.539541 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c2ed13a-5aec-42ad-80a0-1ee315e4fb12" containerName="registry-server" Jan 28 15:07:48 crc kubenswrapper[4893]: E0128 15:07:48.539552 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" containerName="extract-content" Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.539560 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" containerName="extract-content" Jan 28 15:07:48 crc kubenswrapper[4893]: E0128 15:07:48.539572 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9efa33f-313e-484f-967c-1d829b6f8250" containerName="extract-utilities" Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.539580 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9efa33f-313e-484f-967c-1d829b6f8250" containerName="extract-utilities" Jan 28 15:07:48 crc kubenswrapper[4893]: E0128 15:07:48.539591 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9efa33f-313e-484f-967c-1d829b6f8250" containerName="extract-content" Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.539601 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9efa33f-313e-484f-967c-1d829b6f8250" containerName="extract-content" Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.539722 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c2ed13a-5aec-42ad-80a0-1ee315e4fb12" containerName="registry-server" Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.539733 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9efa33f-313e-484f-967c-1d829b6f8250" containerName="registry-server" Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.539745 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a90d22b-0b24-42a5-81cd-6cd43d7fc822" containerName="route-controller-manager" Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.539755 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1d61ecd-2c35-4e84-85db-9ebe350850a6" containerName="registry-server" Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.540166 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-glfdz" Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.542253 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.543429 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.543877 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.544238 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.544760 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.546580 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.552969 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f44897b5c-glfdz"] Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.675917 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/002f025c-019d-4400-87b0-eebf393a3490-config\") pod \"route-controller-manager-f44897b5c-glfdz\" (UID: \"002f025c-019d-4400-87b0-eebf393a3490\") " pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-glfdz" Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.676066 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/002f025c-019d-4400-87b0-eebf393a3490-serving-cert\") pod \"route-controller-manager-f44897b5c-glfdz\" (UID: \"002f025c-019d-4400-87b0-eebf393a3490\") " pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-glfdz" Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.676149 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx99t\" (UniqueName: \"kubernetes.io/projected/002f025c-019d-4400-87b0-eebf393a3490-kube-api-access-qx99t\") pod \"route-controller-manager-f44897b5c-glfdz\" (UID: \"002f025c-019d-4400-87b0-eebf393a3490\") " pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-glfdz" Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.676237 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/002f025c-019d-4400-87b0-eebf393a3490-client-ca\") pod \"route-controller-manager-f44897b5c-glfdz\" (UID: \"002f025c-019d-4400-87b0-eebf393a3490\") " pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-glfdz" Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.777685 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/002f025c-019d-4400-87b0-eebf393a3490-config\") pod \"route-controller-manager-f44897b5c-glfdz\" (UID: 
\"002f025c-019d-4400-87b0-eebf393a3490\") " pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-glfdz" Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.777834 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/002f025c-019d-4400-87b0-eebf393a3490-serving-cert\") pod \"route-controller-manager-f44897b5c-glfdz\" (UID: \"002f025c-019d-4400-87b0-eebf393a3490\") " pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-glfdz" Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.777928 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx99t\" (UniqueName: \"kubernetes.io/projected/002f025c-019d-4400-87b0-eebf393a3490-kube-api-access-qx99t\") pod \"route-controller-manager-f44897b5c-glfdz\" (UID: \"002f025c-019d-4400-87b0-eebf393a3490\") " pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-glfdz" Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.777994 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/002f025c-019d-4400-87b0-eebf393a3490-client-ca\") pod \"route-controller-manager-f44897b5c-glfdz\" (UID: \"002f025c-019d-4400-87b0-eebf393a3490\") " pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-glfdz" Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.779353 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/002f025c-019d-4400-87b0-eebf393a3490-client-ca\") pod \"route-controller-manager-f44897b5c-glfdz\" (UID: \"002f025c-019d-4400-87b0-eebf393a3490\") " pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-glfdz" Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.779463 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/002f025c-019d-4400-87b0-eebf393a3490-config\") pod \"route-controller-manager-f44897b5c-glfdz\" (UID: \"002f025c-019d-4400-87b0-eebf393a3490\") " pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-glfdz" Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.788026 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/002f025c-019d-4400-87b0-eebf393a3490-serving-cert\") pod \"route-controller-manager-f44897b5c-glfdz\" (UID: \"002f025c-019d-4400-87b0-eebf393a3490\") " pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-glfdz" Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.796469 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx99t\" (UniqueName: \"kubernetes.io/projected/002f025c-019d-4400-87b0-eebf393a3490-kube-api-access-qx99t\") pod \"route-controller-manager-f44897b5c-glfdz\" (UID: \"002f025c-019d-4400-87b0-eebf393a3490\") " pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-glfdz" Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.862487 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-glfdz" Jan 28 15:07:48 crc kubenswrapper[4893]: I0128 15:07:48.899605 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a90d22b-0b24-42a5-81cd-6cd43d7fc822" path="/var/lib/kubelet/pods/6a90d22b-0b24-42a5-81cd-6cd43d7fc822/volumes" Jan 28 15:07:49 crc kubenswrapper[4893]: I0128 15:07:49.275739 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f44897b5c-glfdz"] Jan 28 15:07:49 crc kubenswrapper[4893]: I0128 15:07:49.594487 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-glfdz" event={"ID":"002f025c-019d-4400-87b0-eebf393a3490","Type":"ContainerStarted","Data":"89ec8b32c70bfcd4a828ebdc6d196f81f123a36f0762dfecc8c2730b914d03a4"} Jan 28 15:07:49 crc kubenswrapper[4893]: I0128 15:07:49.594830 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-glfdz" event={"ID":"002f025c-019d-4400-87b0-eebf393a3490","Type":"ContainerStarted","Data":"a68f073df4933557eb54bac91dc78fe35d7dae59d61338000a905cbb9e018cda"} Jan 28 15:07:49 crc kubenswrapper[4893]: I0128 15:07:49.594851 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-glfdz" Jan 28 15:07:49 crc kubenswrapper[4893]: I0128 15:07:49.617962 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-glfdz" podStartSLOduration=3.61793231 podStartE2EDuration="3.61793231s" podCreationTimestamp="2026-01-28 15:07:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:07:49.61332319 +0000 UTC m=+387.386938238" watchObservedRunningTime="2026-01-28 15:07:49.61793231 +0000 UTC m=+387.391547358" Jan 28 15:07:49 crc kubenswrapper[4893]: I0128 15:07:49.792689 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f44897b5c-glfdz" Jan 28 15:08:05 crc kubenswrapper[4893]: I0128 15:08:05.722811 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:08:05 crc kubenswrapper[4893]: I0128 15:08:05.723818 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:08:06 crc kubenswrapper[4893]: I0128 15:08:06.586293 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t"] Jan 28 15:08:06 crc kubenswrapper[4893]: I0128 15:08:06.586588 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t" podUID="604cd931-84ed-4955-99a5-5a126c1f2950" containerName="controller-manager" 
containerID="cri-o://1e4767ebee22777de27138ca60b9eae40564baf1e72c94e0af43e00fb3dc7c2c" gracePeriod=30 Jan 28 15:08:07 crc kubenswrapper[4893]: I0128 15:08:07.037604 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t" Jan 28 15:08:07 crc kubenswrapper[4893]: I0128 15:08:07.139805 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/604cd931-84ed-4955-99a5-5a126c1f2950-client-ca\") pod \"604cd931-84ed-4955-99a5-5a126c1f2950\" (UID: \"604cd931-84ed-4955-99a5-5a126c1f2950\") " Jan 28 15:08:07 crc kubenswrapper[4893]: I0128 15:08:07.139914 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/604cd931-84ed-4955-99a5-5a126c1f2950-proxy-ca-bundles\") pod \"604cd931-84ed-4955-99a5-5a126c1f2950\" (UID: \"604cd931-84ed-4955-99a5-5a126c1f2950\") " Jan 28 15:08:07 crc kubenswrapper[4893]: I0128 15:08:07.139986 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/604cd931-84ed-4955-99a5-5a126c1f2950-config\") pod \"604cd931-84ed-4955-99a5-5a126c1f2950\" (UID: \"604cd931-84ed-4955-99a5-5a126c1f2950\") " Jan 28 15:08:07 crc kubenswrapper[4893]: I0128 15:08:07.140034 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qkhx\" (UniqueName: \"kubernetes.io/projected/604cd931-84ed-4955-99a5-5a126c1f2950-kube-api-access-5qkhx\") pod \"604cd931-84ed-4955-99a5-5a126c1f2950\" (UID: \"604cd931-84ed-4955-99a5-5a126c1f2950\") " Jan 28 15:08:07 crc kubenswrapper[4893]: I0128 15:08:07.140058 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/604cd931-84ed-4955-99a5-5a126c1f2950-serving-cert\") pod \"604cd931-84ed-4955-99a5-5a126c1f2950\" (UID: \"604cd931-84ed-4955-99a5-5a126c1f2950\") " Jan 28 15:08:07 crc kubenswrapper[4893]: I0128 15:08:07.140970 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/604cd931-84ed-4955-99a5-5a126c1f2950-client-ca" (OuterVolumeSpecName: "client-ca") pod "604cd931-84ed-4955-99a5-5a126c1f2950" (UID: "604cd931-84ed-4955-99a5-5a126c1f2950"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:08:07 crc kubenswrapper[4893]: I0128 15:08:07.141113 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/604cd931-84ed-4955-99a5-5a126c1f2950-config" (OuterVolumeSpecName: "config") pod "604cd931-84ed-4955-99a5-5a126c1f2950" (UID: "604cd931-84ed-4955-99a5-5a126c1f2950"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:08:07 crc kubenswrapper[4893]: I0128 15:08:07.141236 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/604cd931-84ed-4955-99a5-5a126c1f2950-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "604cd931-84ed-4955-99a5-5a126c1f2950" (UID: "604cd931-84ed-4955-99a5-5a126c1f2950"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:08:07 crc kubenswrapper[4893]: I0128 15:08:07.145165 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604cd931-84ed-4955-99a5-5a126c1f2950-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "604cd931-84ed-4955-99a5-5a126c1f2950" (UID: "604cd931-84ed-4955-99a5-5a126c1f2950"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:08:07 crc kubenswrapper[4893]: I0128 15:08:07.145650 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/604cd931-84ed-4955-99a5-5a126c1f2950-kube-api-access-5qkhx" (OuterVolumeSpecName: "kube-api-access-5qkhx") pod "604cd931-84ed-4955-99a5-5a126c1f2950" (UID: "604cd931-84ed-4955-99a5-5a126c1f2950"). InnerVolumeSpecName "kube-api-access-5qkhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:08:07 crc kubenswrapper[4893]: I0128 15:08:07.241183 4893 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/604cd931-84ed-4955-99a5-5a126c1f2950-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 15:08:07 crc kubenswrapper[4893]: I0128 15:08:07.241465 4893 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/604cd931-84ed-4955-99a5-5a126c1f2950-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:08:07 crc kubenswrapper[4893]: I0128 15:08:07.241492 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qkhx\" (UniqueName: \"kubernetes.io/projected/604cd931-84ed-4955-99a5-5a126c1f2950-kube-api-access-5qkhx\") on node \"crc\" DevicePath \"\"" Jan 28 15:08:07 crc kubenswrapper[4893]: I0128 15:08:07.241504 4893 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/604cd931-84ed-4955-99a5-5a126c1f2950-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:08:07 crc kubenswrapper[4893]: I0128 15:08:07.241512 4893 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/604cd931-84ed-4955-99a5-5a126c1f2950-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:08:07 crc kubenswrapper[4893]: I0128 15:08:07.698248 4893 generic.go:334] "Generic (PLEG): container finished" podID="604cd931-84ed-4955-99a5-5a126c1f2950" containerID="1e4767ebee22777de27138ca60b9eae40564baf1e72c94e0af43e00fb3dc7c2c" exitCode=0 Jan 28 15:08:07 crc kubenswrapper[4893]: I0128 15:08:07.698320 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t" event={"ID":"604cd931-84ed-4955-99a5-5a126c1f2950","Type":"ContainerDied","Data":"1e4767ebee22777de27138ca60b9eae40564baf1e72c94e0af43e00fb3dc7c2c"} Jan 28 15:08:07 crc kubenswrapper[4893]: I0128 15:08:07.698371 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t" event={"ID":"604cd931-84ed-4955-99a5-5a126c1f2950","Type":"ContainerDied","Data":"c7ca7635ce6cf8f2ab87c079db38964944aa70653e4baed2a88ed1d60d652d8a"} Jan 28 15:08:07 crc kubenswrapper[4893]: I0128 15:08:07.698400 4893 scope.go:117] "RemoveContainer" containerID="1e4767ebee22777de27138ca60b9eae40564baf1e72c94e0af43e00fb3dc7c2c" Jan 28 15:08:07 crc kubenswrapper[4893]: I0128 15:08:07.698408 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t" Jan 28 15:08:07 crc kubenswrapper[4893]: I0128 15:08:07.715601 4893 scope.go:117] "RemoveContainer" containerID="1e4767ebee22777de27138ca60b9eae40564baf1e72c94e0af43e00fb3dc7c2c" Jan 28 15:08:07 crc kubenswrapper[4893]: E0128 15:08:07.716831 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e4767ebee22777de27138ca60b9eae40564baf1e72c94e0af43e00fb3dc7c2c\": container with ID starting with 1e4767ebee22777de27138ca60b9eae40564baf1e72c94e0af43e00fb3dc7c2c not found: ID does not exist" containerID="1e4767ebee22777de27138ca60b9eae40564baf1e72c94e0af43e00fb3dc7c2c" Jan 28 15:08:07 crc kubenswrapper[4893]: I0128 15:08:07.716868 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e4767ebee22777de27138ca60b9eae40564baf1e72c94e0af43e00fb3dc7c2c"} err="failed to get container status \"1e4767ebee22777de27138ca60b9eae40564baf1e72c94e0af43e00fb3dc7c2c\": rpc error: code = NotFound desc = could not find container \"1e4767ebee22777de27138ca60b9eae40564baf1e72c94e0af43e00fb3dc7c2c\": container with ID starting with 1e4767ebee22777de27138ca60b9eae40564baf1e72c94e0af43e00fb3dc7c2c not found: ID does not exist" Jan 28 15:08:07 crc kubenswrapper[4893]: I0128 15:08:07.772939 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t"] Jan 28 15:08:07 crc kubenswrapper[4893]: I0128 15:08:07.776904 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-67fbdd65b9-mtq9t"] Jan 28 15:08:08 crc kubenswrapper[4893]: I0128 15:08:08.550203 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-b4c755c99-nqcfp"] Jan 28 15:08:08 crc kubenswrapper[4893]: E0128 15:08:08.550460 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="604cd931-84ed-4955-99a5-5a126c1f2950" containerName="controller-manager" Jan 28 15:08:08 crc kubenswrapper[4893]: I0128 15:08:08.550496 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="604cd931-84ed-4955-99a5-5a126c1f2950" containerName="controller-manager" Jan 28 15:08:08 crc kubenswrapper[4893]: I0128 15:08:08.550603 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="604cd931-84ed-4955-99a5-5a126c1f2950" containerName="controller-manager" Jan 28 15:08:08 crc kubenswrapper[4893]: I0128 15:08:08.551013 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b4c755c99-nqcfp" Jan 28 15:08:08 crc kubenswrapper[4893]: I0128 15:08:08.552815 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 15:08:08 crc kubenswrapper[4893]: I0128 15:08:08.554458 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 15:08:08 crc kubenswrapper[4893]: I0128 15:08:08.554659 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 15:08:08 crc kubenswrapper[4893]: I0128 15:08:08.557956 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 15:08:08 crc kubenswrapper[4893]: I0128 15:08:08.558271 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 15:08:08 crc kubenswrapper[4893]: I0128 15:08:08.562994 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 15:08:08 crc kubenswrapper[4893]: I0128 15:08:08.563297 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 15:08:08 crc kubenswrapper[4893]: I0128 15:08:08.565208 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e-client-ca\") pod \"controller-manager-b4c755c99-nqcfp\" (UID: \"945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e\") " pod="openshift-controller-manager/controller-manager-b4c755c99-nqcfp" Jan 28 15:08:08 crc kubenswrapper[4893]: I0128 15:08:08.565267 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e-config\") pod \"controller-manager-b4c755c99-nqcfp\" (UID: \"945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e\") " pod="openshift-controller-manager/controller-manager-b4c755c99-nqcfp" Jan 28 15:08:08 crc kubenswrapper[4893]: I0128 15:08:08.565316 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e-proxy-ca-bundles\") pod \"controller-manager-b4c755c99-nqcfp\" (UID: \"945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e\") " pod="openshift-controller-manager/controller-manager-b4c755c99-nqcfp" Jan 28 15:08:08 crc kubenswrapper[4893]: I0128 15:08:08.565378 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn9j6\" (UniqueName: \"kubernetes.io/projected/945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e-kube-api-access-mn9j6\") pod \"controller-manager-b4c755c99-nqcfp\" (UID: \"945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e\") " pod="openshift-controller-manager/controller-manager-b4c755c99-nqcfp" Jan 28 15:08:08 crc kubenswrapper[4893]: I0128 15:08:08.565424 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e-serving-cert\") pod \"controller-manager-b4c755c99-nqcfp\" (UID: \"945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e\") " 
pod="openshift-controller-manager/controller-manager-b4c755c99-nqcfp" Jan 28 15:08:08 crc kubenswrapper[4893]: I0128 15:08:08.566419 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b4c755c99-nqcfp"] Jan 28 15:08:08 crc kubenswrapper[4893]: I0128 15:08:08.666731 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e-serving-cert\") pod \"controller-manager-b4c755c99-nqcfp\" (UID: \"945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e\") " pod="openshift-controller-manager/controller-manager-b4c755c99-nqcfp" Jan 28 15:08:08 crc kubenswrapper[4893]: I0128 15:08:08.666872 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e-client-ca\") pod \"controller-manager-b4c755c99-nqcfp\" (UID: \"945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e\") " pod="openshift-controller-manager/controller-manager-b4c755c99-nqcfp" Jan 28 15:08:08 crc kubenswrapper[4893]: I0128 15:08:08.666920 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e-config\") pod \"controller-manager-b4c755c99-nqcfp\" (UID: \"945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e\") " pod="openshift-controller-manager/controller-manager-b4c755c99-nqcfp" Jan 28 15:08:08 crc kubenswrapper[4893]: I0128 15:08:08.666957 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e-proxy-ca-bundles\") pod \"controller-manager-b4c755c99-nqcfp\" (UID: \"945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e\") " pod="openshift-controller-manager/controller-manager-b4c755c99-nqcfp" Jan 28 15:08:08 crc kubenswrapper[4893]: I0128 15:08:08.667029 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn9j6\" (UniqueName: \"kubernetes.io/projected/945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e-kube-api-access-mn9j6\") pod \"controller-manager-b4c755c99-nqcfp\" (UID: \"945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e\") " pod="openshift-controller-manager/controller-manager-b4c755c99-nqcfp" Jan 28 15:08:08 crc kubenswrapper[4893]: I0128 15:08:08.668875 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e-proxy-ca-bundles\") pod \"controller-manager-b4c755c99-nqcfp\" (UID: \"945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e\") " pod="openshift-controller-manager/controller-manager-b4c755c99-nqcfp" Jan 28 15:08:08 crc kubenswrapper[4893]: I0128 15:08:08.668901 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e-client-ca\") pod \"controller-manager-b4c755c99-nqcfp\" (UID: \"945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e\") " pod="openshift-controller-manager/controller-manager-b4c755c99-nqcfp" Jan 28 15:08:08 crc kubenswrapper[4893]: I0128 15:08:08.669032 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e-config\") pod \"controller-manager-b4c755c99-nqcfp\" (UID: \"945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e\") " 
pod="openshift-controller-manager/controller-manager-b4c755c99-nqcfp" Jan 28 15:08:08 crc kubenswrapper[4893]: I0128 15:08:08.673377 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e-serving-cert\") pod \"controller-manager-b4c755c99-nqcfp\" (UID: \"945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e\") " pod="openshift-controller-manager/controller-manager-b4c755c99-nqcfp" Jan 28 15:08:08 crc kubenswrapper[4893]: I0128 15:08:08.683396 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn9j6\" (UniqueName: \"kubernetes.io/projected/945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e-kube-api-access-mn9j6\") pod \"controller-manager-b4c755c99-nqcfp\" (UID: \"945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e\") " pod="openshift-controller-manager/controller-manager-b4c755c99-nqcfp" Jan 28 15:08:08 crc kubenswrapper[4893]: I0128 15:08:08.898984 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="604cd931-84ed-4955-99a5-5a126c1f2950" path="/var/lib/kubelet/pods/604cd931-84ed-4955-99a5-5a126c1f2950/volumes" Jan 28 15:08:08 crc kubenswrapper[4893]: I0128 15:08:08.906319 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b4c755c99-nqcfp" Jan 28 15:08:09 crc kubenswrapper[4893]: I0128 15:08:09.306024 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b4c755c99-nqcfp"] Jan 28 15:08:09 crc kubenswrapper[4893]: I0128 15:08:09.710531 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b4c755c99-nqcfp" event={"ID":"945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e","Type":"ContainerStarted","Data":"2c8182b522adf32ef8720ff3f721d598f2428261df3b9aeb11b51df15886ade5"} Jan 28 15:08:09 crc kubenswrapper[4893]: I0128 15:08:09.710893 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b4c755c99-nqcfp" event={"ID":"945980a3-58b3-4db5-a4a2-d2c6a2d5bd0e","Type":"ContainerStarted","Data":"3385dba2c32132fbab1a5091d785c161d7493b25093ef554c71b1dd5e947ec59"} Jan 28 15:08:09 crc kubenswrapper[4893]: I0128 15:08:09.710919 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-b4c755c99-nqcfp" Jan 28 15:08:09 crc kubenswrapper[4893]: I0128 15:08:09.718249 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-b4c755c99-nqcfp" Jan 28 15:08:09 crc kubenswrapper[4893]: I0128 15:08:09.734285 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-b4c755c99-nqcfp" podStartSLOduration=3.734267715 podStartE2EDuration="3.734267715s" podCreationTimestamp="2026-01-28 15:08:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:08:09.731369904 +0000 UTC m=+407.504984922" watchObservedRunningTime="2026-01-28 15:08:09.734267715 +0000 UTC m=+407.507882743" Jan 28 15:08:35 crc kubenswrapper[4893]: I0128 15:08:35.722092 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 28 15:08:35 crc kubenswrapper[4893]: I0128 15:08:35.722705 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:08:35 crc kubenswrapper[4893]: I0128 15:08:35.722751 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" Jan 28 15:08:35 crc kubenswrapper[4893]: I0128 15:08:35.723326 4893 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"20d8eb6fb2ed649557150caacec59c356900810bc0df5c731a7427a65b6878f0"} pod="openshift-machine-config-operator/machine-config-daemon-l2nht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:08:35 crc kubenswrapper[4893]: I0128 15:08:35.723375 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" containerID="cri-o://20d8eb6fb2ed649557150caacec59c356900810bc0df5c731a7427a65b6878f0" gracePeriod=600 Jan 28 15:08:35 crc kubenswrapper[4893]: I0128 15:08:35.876529 4893 generic.go:334] "Generic (PLEG): container finished" podID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerID="20d8eb6fb2ed649557150caacec59c356900810bc0df5c731a7427a65b6878f0" exitCode=0 Jan 28 15:08:35 crc kubenswrapper[4893]: I0128 15:08:35.876587 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" event={"ID":"b2ddd967-f9a8-464a-95de-512c9c5874fd","Type":"ContainerDied","Data":"20d8eb6fb2ed649557150caacec59c356900810bc0df5c731a7427a65b6878f0"} Jan 28 15:08:35 crc kubenswrapper[4893]: I0128 15:08:35.876631 4893 scope.go:117] "RemoveContainer" containerID="d2675a60bf514654daf9316a8cd81d1d82b31c6618d85a5a577cfe44caebfa95" Jan 28 15:08:36 crc kubenswrapper[4893]: I0128 15:08:36.886658 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" event={"ID":"b2ddd967-f9a8-464a-95de-512c9c5874fd","Type":"ContainerStarted","Data":"194444ae046498e08f8f911d426d18ad3d7857b481964cff9f834815e3198cff"} Jan 28 15:08:53 crc kubenswrapper[4893]: I0128 15:08:53.884494 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zgw9r"] Jan 28 15:09:18 crc kubenswrapper[4893]: I0128 15:09:18.914551 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" podUID="17fc2b8f-01ad-426d-9dfa-4531ac3ff28e" containerName="oauth-openshift" containerID="cri-o://c62f4360ba209d8a01f6d8298d74bf6bdd9c0b6cbfaeef5c17d0ba7b5e6a88bb" gracePeriod=15 Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.192193 4893 generic.go:334] "Generic (PLEG): container finished" podID="17fc2b8f-01ad-426d-9dfa-4531ac3ff28e" containerID="c62f4360ba209d8a01f6d8298d74bf6bdd9c0b6cbfaeef5c17d0ba7b5e6a88bb" exitCode=0 Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.192314 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" event={"ID":"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e","Type":"ContainerDied","Data":"c62f4360ba209d8a01f6d8298d74bf6bdd9c0b6cbfaeef5c17d0ba7b5e6a88bb"} Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.320307 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.356908 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2"] Jan 28 15:09:19 crc kubenswrapper[4893]: E0128 15:09:19.357220 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17fc2b8f-01ad-426d-9dfa-4531ac3ff28e" containerName="oauth-openshift" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.357245 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="17fc2b8f-01ad-426d-9dfa-4531ac3ff28e" containerName="oauth-openshift" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.357358 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="17fc2b8f-01ad-426d-9dfa-4531ac3ff28e" containerName="oauth-openshift" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.358007 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.372991 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2"] Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.421234 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-cliconfig\") pod \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.421309 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-router-certs\") pod \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.421355 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-user-template-login\") pod \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.421407 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-serving-cert\") pod \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.421446 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-session\") pod \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 
15:09:19.421511 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-user-idp-0-file-data\") pod \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.421558 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-ocp-branding-template\") pod \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.421668 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-user-template-error\") pod \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.421725 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fmtw\" (UniqueName: \"kubernetes.io/projected/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-kube-api-access-5fmtw\") pod \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.421748 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-user-template-provider-selection\") pod \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.421773 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-trusted-ca-bundle\") pod \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.421811 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-audit-dir\") pod \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.421862 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-service-ca\") pod \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.421906 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-audit-policies\") pod \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\" (UID: \"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e\") " Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.422810 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "17fc2b8f-01ad-426d-9dfa-4531ac3ff28e" (UID: "17fc2b8f-01ad-426d-9dfa-4531ac3ff28e"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.422814 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "17fc2b8f-01ad-426d-9dfa-4531ac3ff28e" (UID: "17fc2b8f-01ad-426d-9dfa-4531ac3ff28e"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.422996 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "17fc2b8f-01ad-426d-9dfa-4531ac3ff28e" (UID: "17fc2b8f-01ad-426d-9dfa-4531ac3ff28e"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.423250 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "17fc2b8f-01ad-426d-9dfa-4531ac3ff28e" (UID: "17fc2b8f-01ad-426d-9dfa-4531ac3ff28e"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.423529 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "17fc2b8f-01ad-426d-9dfa-4531ac3ff28e" (UID: "17fc2b8f-01ad-426d-9dfa-4531ac3ff28e"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.432703 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "17fc2b8f-01ad-426d-9dfa-4531ac3ff28e" (UID: "17fc2b8f-01ad-426d-9dfa-4531ac3ff28e"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.432790 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-kube-api-access-5fmtw" (OuterVolumeSpecName: "kube-api-access-5fmtw") pod "17fc2b8f-01ad-426d-9dfa-4531ac3ff28e" (UID: "17fc2b8f-01ad-426d-9dfa-4531ac3ff28e"). InnerVolumeSpecName "kube-api-access-5fmtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.432982 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "17fc2b8f-01ad-426d-9dfa-4531ac3ff28e" (UID: "17fc2b8f-01ad-426d-9dfa-4531ac3ff28e"). 
InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.433383 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "17fc2b8f-01ad-426d-9dfa-4531ac3ff28e" (UID: "17fc2b8f-01ad-426d-9dfa-4531ac3ff28e"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.433868 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "17fc2b8f-01ad-426d-9dfa-4531ac3ff28e" (UID: "17fc2b8f-01ad-426d-9dfa-4531ac3ff28e"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.434054 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "17fc2b8f-01ad-426d-9dfa-4531ac3ff28e" (UID: "17fc2b8f-01ad-426d-9dfa-4531ac3ff28e"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.434157 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "17fc2b8f-01ad-426d-9dfa-4531ac3ff28e" (UID: "17fc2b8f-01ad-426d-9dfa-4531ac3ff28e"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.434337 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "17fc2b8f-01ad-426d-9dfa-4531ac3ff28e" (UID: "17fc2b8f-01ad-426d-9dfa-4531ac3ff28e"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.434470 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "17fc2b8f-01ad-426d-9dfa-4531ac3ff28e" (UID: "17fc2b8f-01ad-426d-9dfa-4531ac3ff28e"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.522995 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-system-router-certs\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.523049 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.523066 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-audit-dir\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.523084 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-user-template-login\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.523106 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-user-template-error\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.523128 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.523145 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-audit-policies\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.523161 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-system-session\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" 
(UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.523188 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-system-service-ca\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.523204 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.523223 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.523243 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.523315 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7whq\" (UniqueName: \"kubernetes.io/projected/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-kube-api-access-n7whq\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.523367 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.523420 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fmtw\" (UniqueName: \"kubernetes.io/projected/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-kube-api-access-5fmtw\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.523432 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:19 crc 
kubenswrapper[4893]: I0128 15:09:19.523445 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.523455 4893 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.523465 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.523490 4893 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.523500 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.523518 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.523528 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.523539 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.523547 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.523556 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.523565 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.523576 4893 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:19 crc 
kubenswrapper[4893]: I0128 15:09:19.624146 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.624227 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-system-router-certs\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.624260 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.624286 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-audit-dir\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.624308 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-user-template-login\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.624334 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-user-template-error\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.624360 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.624380 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-audit-policies\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.624406 4893 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-system-session\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.624441 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-system-service-ca\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.624461 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.624510 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.624537 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.624570 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7whq\" (UniqueName: \"kubernetes.io/projected/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-kube-api-access-n7whq\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.624686 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-audit-dir\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.625905 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.625922 4893 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-system-service-ca\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.626904 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-audit-policies\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.627698 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.628272 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-system-router-certs\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.629281 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.631789 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.631982 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-system-session\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.632041 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-user-template-error\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.634075 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-user-template-login\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.634649 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.635635 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.639768 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7whq\" (UniqueName: \"kubernetes.io/projected/76fb8402-a949-4c54-a1a2-1ee6fb7d39f9-kube-api-access-n7whq\") pod \"oauth-openshift-6dbf6c6b4f-258h2\" (UID: \"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9\") " pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:19 crc kubenswrapper[4893]: I0128 15:09:19.675836 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:20 crc kubenswrapper[4893]: I0128 15:09:20.074844 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2"] Jan 28 15:09:20 crc kubenswrapper[4893]: I0128 15:09:20.198869 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" event={"ID":"17fc2b8f-01ad-426d-9dfa-4531ac3ff28e","Type":"ContainerDied","Data":"9dc84e4bc4f50bc0c9bf471442861127c14f9d2270653181d414466feefd8f6e"} Jan 28 15:09:20 crc kubenswrapper[4893]: I0128 15:09:20.198890 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-zgw9r" Jan 28 15:09:20 crc kubenswrapper[4893]: I0128 15:09:20.198920 4893 scope.go:117] "RemoveContainer" containerID="c62f4360ba209d8a01f6d8298d74bf6bdd9c0b6cbfaeef5c17d0ba7b5e6a88bb" Jan 28 15:09:20 crc kubenswrapper[4893]: I0128 15:09:20.200023 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" event={"ID":"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9","Type":"ContainerStarted","Data":"8d8b65b1f74cdcf4d0e58dba8bbd373621ba2202ad94cabc9b8e11412b99f3f2"} Jan 28 15:09:20 crc kubenswrapper[4893]: I0128 15:09:20.241173 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zgw9r"] Jan 28 15:09:20 crc kubenswrapper[4893]: I0128 15:09:20.244485 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-zgw9r"] Jan 28 15:09:20 crc kubenswrapper[4893]: I0128 15:09:20.898086 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17fc2b8f-01ad-426d-9dfa-4531ac3ff28e" path="/var/lib/kubelet/pods/17fc2b8f-01ad-426d-9dfa-4531ac3ff28e/volumes" Jan 28 15:09:21 crc kubenswrapper[4893]: I0128 15:09:21.206677 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" event={"ID":"76fb8402-a949-4c54-a1a2-1ee6fb7d39f9","Type":"ContainerStarted","Data":"4007706eab157d9598a08cc13aebdc090d0162a705e48e778e8b4f26a5d978a1"} Jan 28 15:09:21 crc kubenswrapper[4893]: I0128 15:09:21.207253 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:21 crc kubenswrapper[4893]: I0128 15:09:21.214447 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" Jan 28 15:09:21 crc kubenswrapper[4893]: I0128 15:09:21.228290 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6dbf6c6b4f-258h2" podStartSLOduration=28.228270908 podStartE2EDuration="28.228270908s" podCreationTimestamp="2026-01-28 15:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:09:21.226451287 +0000 UTC m=+479.000066325" watchObservedRunningTime="2026-01-28 15:09:21.228270908 +0000 UTC m=+479.001885936" Jan 28 15:09:47 crc kubenswrapper[4893]: I0128 15:09:47.632621 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-46wz5"] Jan 28 15:09:47 crc kubenswrapper[4893]: I0128 15:09:47.633467 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-46wz5" podUID="ace4b0ad-d8d3-48aa-8635-6e6e96030672" containerName="registry-server" containerID="cri-o://c8903d241b056aba7483e427b14f484ac5b6792c53ac188643254d9fb98566a8" gracePeriod=30 Jan 28 15:09:47 crc kubenswrapper[4893]: I0128 15:09:47.646199 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5gtgr"] Jan 28 15:09:47 crc kubenswrapper[4893]: I0128 15:09:47.646791 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5gtgr" podUID="43843abc-ea99-476a-81c0-76d6530f7c75" containerName="registry-server" 
containerID="cri-o://3196393a71fd4433f0a75e645bcee184cc0cfb262a7fffeb39aa66bb1a00dbbc" gracePeriod=30 Jan 28 15:09:47 crc kubenswrapper[4893]: I0128 15:09:47.661806 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fzfvj"] Jan 28 15:09:47 crc kubenswrapper[4893]: I0128 15:09:47.662344 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-fzfvj" podUID="9a587792-e86e-434f-873e-c7ce3aac8bce" containerName="marketplace-operator" containerID="cri-o://2a4656bdd30cb8f1162410c9a24ad5ed87a5abd8d1bc59ab392abb0842545b50" gracePeriod=30 Jan 28 15:09:47 crc kubenswrapper[4893]: I0128 15:09:47.670668 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g675f"] Jan 28 15:09:47 crc kubenswrapper[4893]: I0128 15:09:47.671280 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g675f" podUID="94f2541b-4f69-4bbc-9388-c040e53d85a0" containerName="registry-server" containerID="cri-o://78467fdbd2a193568e7e45add517b915c8ed8b5de5fd0590125fadb1d857c5a9" gracePeriod=30 Jan 28 15:09:47 crc kubenswrapper[4893]: I0128 15:09:47.674903 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nwlnm"] Jan 28 15:09:47 crc kubenswrapper[4893]: I0128 15:09:47.675341 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nwlnm" podUID="f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" containerName="registry-server" containerID="cri-o://729023193e6b1bda6fc6d60539a8e4559bcc05e677a01e03d3cea81c63c009fb" gracePeriod=30 Jan 28 15:09:47 crc kubenswrapper[4893]: I0128 15:09:47.684081 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-prr4s"] Jan 28 15:09:47 crc kubenswrapper[4893]: I0128 15:09:47.684851 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-prr4s" Jan 28 15:09:47 crc kubenswrapper[4893]: I0128 15:09:47.721263 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-prr4s"] Jan 28 15:09:47 crc kubenswrapper[4893]: I0128 15:09:47.808817 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07fe07b9-ae23-4203-b85e-02462161f5b3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-prr4s\" (UID: \"07fe07b9-ae23-4203-b85e-02462161f5b3\") " pod="openshift-marketplace/marketplace-operator-79b997595-prr4s" Jan 28 15:09:47 crc kubenswrapper[4893]: I0128 15:09:47.808890 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/07fe07b9-ae23-4203-b85e-02462161f5b3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-prr4s\" (UID: \"07fe07b9-ae23-4203-b85e-02462161f5b3\") " pod="openshift-marketplace/marketplace-operator-79b997595-prr4s" Jan 28 15:09:47 crc kubenswrapper[4893]: I0128 15:09:47.808917 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzfld\" (UniqueName: \"kubernetes.io/projected/07fe07b9-ae23-4203-b85e-02462161f5b3-kube-api-access-mzfld\") pod \"marketplace-operator-79b997595-prr4s\" (UID: \"07fe07b9-ae23-4203-b85e-02462161f5b3\") " pod="openshift-marketplace/marketplace-operator-79b997595-prr4s" Jan 28 15:09:47 crc kubenswrapper[4893]: I0128 15:09:47.909806 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07fe07b9-ae23-4203-b85e-02462161f5b3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-prr4s\" (UID: \"07fe07b9-ae23-4203-b85e-02462161f5b3\") " pod="openshift-marketplace/marketplace-operator-79b997595-prr4s" Jan 28 15:09:47 crc kubenswrapper[4893]: I0128 15:09:47.909865 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/07fe07b9-ae23-4203-b85e-02462161f5b3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-prr4s\" (UID: \"07fe07b9-ae23-4203-b85e-02462161f5b3\") " pod="openshift-marketplace/marketplace-operator-79b997595-prr4s" Jan 28 15:09:47 crc kubenswrapper[4893]: I0128 15:09:47.909891 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzfld\" (UniqueName: \"kubernetes.io/projected/07fe07b9-ae23-4203-b85e-02462161f5b3-kube-api-access-mzfld\") pod \"marketplace-operator-79b997595-prr4s\" (UID: \"07fe07b9-ae23-4203-b85e-02462161f5b3\") " pod="openshift-marketplace/marketplace-operator-79b997595-prr4s" Jan 28 15:09:47 crc kubenswrapper[4893]: I0128 15:09:47.911241 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07fe07b9-ae23-4203-b85e-02462161f5b3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-prr4s\" (UID: \"07fe07b9-ae23-4203-b85e-02462161f5b3\") " pod="openshift-marketplace/marketplace-operator-79b997595-prr4s" Jan 28 15:09:47 crc kubenswrapper[4893]: I0128 15:09:47.917887 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/07fe07b9-ae23-4203-b85e-02462161f5b3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-prr4s\" (UID: \"07fe07b9-ae23-4203-b85e-02462161f5b3\") " pod="openshift-marketplace/marketplace-operator-79b997595-prr4s" Jan 28 15:09:47 crc kubenswrapper[4893]: I0128 15:09:47.931919 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzfld\" (UniqueName: \"kubernetes.io/projected/07fe07b9-ae23-4203-b85e-02462161f5b3-kube-api-access-mzfld\") pod \"marketplace-operator-79b997595-prr4s\" (UID: \"07fe07b9-ae23-4203-b85e-02462161f5b3\") " pod="openshift-marketplace/marketplace-operator-79b997595-prr4s" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.005885 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-prr4s" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.081413 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5gtgr" Jan 28 15:09:48 crc kubenswrapper[4893]: E0128 15:09:48.100923 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c8903d241b056aba7483e427b14f484ac5b6792c53ac188643254d9fb98566a8 is running failed: container process not found" containerID="c8903d241b056aba7483e427b14f484ac5b6792c53ac188643254d9fb98566a8" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 15:09:48 crc kubenswrapper[4893]: E0128 15:09:48.101340 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c8903d241b056aba7483e427b14f484ac5b6792c53ac188643254d9fb98566a8 is running failed: container process not found" containerID="c8903d241b056aba7483e427b14f484ac5b6792c53ac188643254d9fb98566a8" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 15:09:48 crc kubenswrapper[4893]: E0128 15:09:48.101721 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c8903d241b056aba7483e427b14f484ac5b6792c53ac188643254d9fb98566a8 is running failed: container process not found" containerID="c8903d241b056aba7483e427b14f484ac5b6792c53ac188643254d9fb98566a8" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 15:09:48 crc kubenswrapper[4893]: E0128 15:09:48.101767 4893 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c8903d241b056aba7483e427b14f484ac5b6792c53ac188643254d9fb98566a8 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-46wz5" podUID="ace4b0ad-d8d3-48aa-8635-6e6e96030672" containerName="registry-server" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.197695 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nwlnm" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.198863 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-46wz5" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.201430 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g675f" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.204956 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fzfvj" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.213172 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43843abc-ea99-476a-81c0-76d6530f7c75-utilities\") pod \"43843abc-ea99-476a-81c0-76d6530f7c75\" (UID: \"43843abc-ea99-476a-81c0-76d6530f7c75\") " Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.213202 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43843abc-ea99-476a-81c0-76d6530f7c75-catalog-content\") pod \"43843abc-ea99-476a-81c0-76d6530f7c75\" (UID: \"43843abc-ea99-476a-81c0-76d6530f7c75\") " Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.213284 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zdjz\" (UniqueName: \"kubernetes.io/projected/43843abc-ea99-476a-81c0-76d6530f7c75-kube-api-access-7zdjz\") pod \"43843abc-ea99-476a-81c0-76d6530f7c75\" (UID: \"43843abc-ea99-476a-81c0-76d6530f7c75\") " Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.217442 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43843abc-ea99-476a-81c0-76d6530f7c75-utilities" (OuterVolumeSpecName: "utilities") pod "43843abc-ea99-476a-81c0-76d6530f7c75" (UID: "43843abc-ea99-476a-81c0-76d6530f7c75"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.220869 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43843abc-ea99-476a-81c0-76d6530f7c75-kube-api-access-7zdjz" (OuterVolumeSpecName: "kube-api-access-7zdjz") pod "43843abc-ea99-476a-81c0-76d6530f7c75" (UID: "43843abc-ea99-476a-81c0-76d6530f7c75"). InnerVolumeSpecName "kube-api-access-7zdjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.309879 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43843abc-ea99-476a-81c0-76d6530f7c75-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "43843abc-ea99-476a-81c0-76d6530f7c75" (UID: "43843abc-ea99-476a-81c0-76d6530f7c75"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.314926 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace4b0ad-d8d3-48aa-8635-6e6e96030672-utilities\") pod \"ace4b0ad-d8d3-48aa-8635-6e6e96030672\" (UID: \"ace4b0ad-d8d3-48aa-8635-6e6e96030672\") " Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.314970 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh9zt\" (UniqueName: \"kubernetes.io/projected/ace4b0ad-d8d3-48aa-8635-6e6e96030672-kube-api-access-hh9zt\") pod \"ace4b0ad-d8d3-48aa-8635-6e6e96030672\" (UID: \"ace4b0ad-d8d3-48aa-8635-6e6e96030672\") " Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.315000 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f2541b-4f69-4bbc-9388-c040e53d85a0-utilities\") pod \"94f2541b-4f69-4bbc-9388-c040e53d85a0\" (UID: \"94f2541b-4f69-4bbc-9388-c040e53d85a0\") " Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.315017 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a587792-e86e-434f-873e-c7ce3aac8bce-marketplace-trusted-ca\") pod \"9a587792-e86e-434f-873e-c7ce3aac8bce\" (UID: \"9a587792-e86e-434f-873e-c7ce3aac8bce\") " Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.315045 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcdwb\" (UniqueName: \"kubernetes.io/projected/9a587792-e86e-434f-873e-c7ce3aac8bce-kube-api-access-zcdwb\") pod \"9a587792-e86e-434f-873e-c7ce3aac8bce\" (UID: \"9a587792-e86e-434f-873e-c7ce3aac8bce\") " Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.315062 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9a587792-e86e-434f-873e-c7ce3aac8bce-marketplace-operator-metrics\") pod \"9a587792-e86e-434f-873e-c7ce3aac8bce\" (UID: \"9a587792-e86e-434f-873e-c7ce3aac8bce\") " Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.315085 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2-catalog-content\") pod \"f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2\" (UID: \"f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2\") " Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.315106 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace4b0ad-d8d3-48aa-8635-6e6e96030672-catalog-content\") pod \"ace4b0ad-d8d3-48aa-8635-6e6e96030672\" (UID: \"ace4b0ad-d8d3-48aa-8635-6e6e96030672\") " Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.315140 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8vtw\" (UniqueName: \"kubernetes.io/projected/94f2541b-4f69-4bbc-9388-c040e53d85a0-kube-api-access-c8vtw\") pod \"94f2541b-4f69-4bbc-9388-c040e53d85a0\" (UID: \"94f2541b-4f69-4bbc-9388-c040e53d85a0\") " Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.315160 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2-utilities\") pod \"f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2\" (UID: \"f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2\") " Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.315185 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94f2541b-4f69-4bbc-9388-c040e53d85a0-catalog-content\") pod \"94f2541b-4f69-4bbc-9388-c040e53d85a0\" (UID: \"94f2541b-4f69-4bbc-9388-c040e53d85a0\") " Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.315201 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p26nl\" (UniqueName: \"kubernetes.io/projected/f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2-kube-api-access-p26nl\") pod \"f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2\" (UID: \"f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2\") " Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.315482 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43843abc-ea99-476a-81c0-76d6530f7c75-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.315498 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43843abc-ea99-476a-81c0-76d6530f7c75-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.315510 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zdjz\" (UniqueName: \"kubernetes.io/projected/43843abc-ea99-476a-81c0-76d6530f7c75-kube-api-access-7zdjz\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.316726 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a587792-e86e-434f-873e-c7ce3aac8bce-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "9a587792-e86e-434f-873e-c7ce3aac8bce" (UID: "9a587792-e86e-434f-873e-c7ce3aac8bce"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.317352 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94f2541b-4f69-4bbc-9388-c040e53d85a0-utilities" (OuterVolumeSpecName: "utilities") pod "94f2541b-4f69-4bbc-9388-c040e53d85a0" (UID: "94f2541b-4f69-4bbc-9388-c040e53d85a0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.318513 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2-utilities" (OuterVolumeSpecName: "utilities") pod "f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" (UID: "f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.318625 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a587792-e86e-434f-873e-c7ce3aac8bce-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "9a587792-e86e-434f-873e-c7ce3aac8bce" (UID: "9a587792-e86e-434f-873e-c7ce3aac8bce"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.321672 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94f2541b-4f69-4bbc-9388-c040e53d85a0-kube-api-access-c8vtw" (OuterVolumeSpecName: "kube-api-access-c8vtw") pod "94f2541b-4f69-4bbc-9388-c040e53d85a0" (UID: "94f2541b-4f69-4bbc-9388-c040e53d85a0"). InnerVolumeSpecName "kube-api-access-c8vtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.321697 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2-kube-api-access-p26nl" (OuterVolumeSpecName: "kube-api-access-p26nl") pod "f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" (UID: "f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2"). InnerVolumeSpecName "kube-api-access-p26nl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.321972 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ace4b0ad-d8d3-48aa-8635-6e6e96030672-utilities" (OuterVolumeSpecName: "utilities") pod "ace4b0ad-d8d3-48aa-8635-6e6e96030672" (UID: "ace4b0ad-d8d3-48aa-8635-6e6e96030672"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.322122 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a587792-e86e-434f-873e-c7ce3aac8bce-kube-api-access-zcdwb" (OuterVolumeSpecName: "kube-api-access-zcdwb") pod "9a587792-e86e-434f-873e-c7ce3aac8bce" (UID: "9a587792-e86e-434f-873e-c7ce3aac8bce"). InnerVolumeSpecName "kube-api-access-zcdwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.328766 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ace4b0ad-d8d3-48aa-8635-6e6e96030672-kube-api-access-hh9zt" (OuterVolumeSpecName: "kube-api-access-hh9zt") pod "ace4b0ad-d8d3-48aa-8635-6e6e96030672" (UID: "ace4b0ad-d8d3-48aa-8635-6e6e96030672"). InnerVolumeSpecName "kube-api-access-hh9zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.347434 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94f2541b-4f69-4bbc-9388-c040e53d85a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94f2541b-4f69-4bbc-9388-c040e53d85a0" (UID: "94f2541b-4f69-4bbc-9388-c040e53d85a0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.377441 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ace4b0ad-d8d3-48aa-8635-6e6e96030672-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ace4b0ad-d8d3-48aa-8635-6e6e96030672" (UID: "ace4b0ad-d8d3-48aa-8635-6e6e96030672"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.381490 4893 generic.go:334] "Generic (PLEG): container finished" podID="f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" containerID="729023193e6b1bda6fc6d60539a8e4559bcc05e677a01e03d3cea81c63c009fb" exitCode=0 Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.381549 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nwlnm" event={"ID":"f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2","Type":"ContainerDied","Data":"729023193e6b1bda6fc6d60539a8e4559bcc05e677a01e03d3cea81c63c009fb"} Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.381575 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nwlnm" event={"ID":"f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2","Type":"ContainerDied","Data":"a5ae9372da2036ec65fe74526c8ad1dceb2814422cb946d02303d9939b438f10"} Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.381591 4893 scope.go:117] "RemoveContainer" containerID="729023193e6b1bda6fc6d60539a8e4559bcc05e677a01e03d3cea81c63c009fb" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.381687 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nwlnm" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.385246 4893 generic.go:334] "Generic (PLEG): container finished" podID="94f2541b-4f69-4bbc-9388-c040e53d85a0" containerID="78467fdbd2a193568e7e45add517b915c8ed8b5de5fd0590125fadb1d857c5a9" exitCode=0 Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.385291 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g675f" event={"ID":"94f2541b-4f69-4bbc-9388-c040e53d85a0","Type":"ContainerDied","Data":"78467fdbd2a193568e7e45add517b915c8ed8b5de5fd0590125fadb1d857c5a9"} Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.385309 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g675f" event={"ID":"94f2541b-4f69-4bbc-9388-c040e53d85a0","Type":"ContainerDied","Data":"5750af12fae08979bfaa99d6dd8251b234c145ee1445a7389126507fd1ae0aeb"} Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.385353 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g675f" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.394610 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-46wz5" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.394672 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46wz5" event={"ID":"ace4b0ad-d8d3-48aa-8635-6e6e96030672","Type":"ContainerDied","Data":"c8903d241b056aba7483e427b14f484ac5b6792c53ac188643254d9fb98566a8"} Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.398083 4893 generic.go:334] "Generic (PLEG): container finished" podID="ace4b0ad-d8d3-48aa-8635-6e6e96030672" containerID="c8903d241b056aba7483e427b14f484ac5b6792c53ac188643254d9fb98566a8" exitCode=0 Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.398496 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46wz5" event={"ID":"ace4b0ad-d8d3-48aa-8635-6e6e96030672","Type":"ContainerDied","Data":"4f5bec911cd7e3607988a942da1f4aff96577b0cdbb4bdf24a23c17ce4e054e2"} Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.400988 4893 generic.go:334] "Generic (PLEG): container finished" podID="43843abc-ea99-476a-81c0-76d6530f7c75" containerID="3196393a71fd4433f0a75e645bcee184cc0cfb262a7fffeb39aa66bb1a00dbbc" exitCode=0 Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.401060 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5gtgr" event={"ID":"43843abc-ea99-476a-81c0-76d6530f7c75","Type":"ContainerDied","Data":"3196393a71fd4433f0a75e645bcee184cc0cfb262a7fffeb39aa66bb1a00dbbc"} Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.401086 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5gtgr" event={"ID":"43843abc-ea99-476a-81c0-76d6530f7c75","Type":"ContainerDied","Data":"cb82e0bcca4a4bbc800edf029648f91c6ab03fa68bc631ff1b6abb90cde31028"} Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.401301 4893 scope.go:117] "RemoveContainer" containerID="cf99786428fb4340e15a1b7e27396705ce11c36a5509e9d98b7c8b11dc84fe64" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.401356 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5gtgr" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.405514 4893 generic.go:334] "Generic (PLEG): container finished" podID="9a587792-e86e-434f-873e-c7ce3aac8bce" containerID="2a4656bdd30cb8f1162410c9a24ad5ed87a5abd8d1bc59ab392abb0842545b50" exitCode=0 Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.405667 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fzfvj" event={"ID":"9a587792-e86e-434f-873e-c7ce3aac8bce","Type":"ContainerDied","Data":"2a4656bdd30cb8f1162410c9a24ad5ed87a5abd8d1bc59ab392abb0842545b50"} Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.405828 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fzfvj" event={"ID":"9a587792-e86e-434f-873e-c7ce3aac8bce","Type":"ContainerDied","Data":"b3d70c59917687379961107605763934b1bb3e879ddf786258ad29c437713686"} Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.405702 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fzfvj" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.425359 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh9zt\" (UniqueName: \"kubernetes.io/projected/ace4b0ad-d8d3-48aa-8635-6e6e96030672-kube-api-access-hh9zt\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.425413 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94f2541b-4f69-4bbc-9388-c040e53d85a0-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.425595 4893 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a587792-e86e-434f-873e-c7ce3aac8bce-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.425609 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcdwb\" (UniqueName: \"kubernetes.io/projected/9a587792-e86e-434f-873e-c7ce3aac8bce-kube-api-access-zcdwb\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.425622 4893 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9a587792-e86e-434f-873e-c7ce3aac8bce-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.425791 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ace4b0ad-d8d3-48aa-8635-6e6e96030672-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.425804 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8vtw\" (UniqueName: \"kubernetes.io/projected/94f2541b-4f69-4bbc-9388-c040e53d85a0-kube-api-access-c8vtw\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.425816 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.425917 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94f2541b-4f69-4bbc-9388-c040e53d85a0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.425935 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p26nl\" (UniqueName: \"kubernetes.io/projected/f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2-kube-api-access-p26nl\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.425948 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ace4b0ad-d8d3-48aa-8635-6e6e96030672-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.440001 4893 scope.go:117] "RemoveContainer" containerID="634c36920bba0bb3ce14092e60fa8095188f989249b66dc959105714681bc709" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.445099 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g675f"] Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.456204 4893 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g675f"] Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.461394 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-46wz5"] Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.465467 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-46wz5"] Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.473277 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fzfvj"] Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.478633 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fzfvj"] Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.483513 4893 scope.go:117] "RemoveContainer" containerID="729023193e6b1bda6fc6d60539a8e4559bcc05e677a01e03d3cea81c63c009fb" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.483536 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" (UID: "f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:09:48 crc kubenswrapper[4893]: E0128 15:09:48.484024 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"729023193e6b1bda6fc6d60539a8e4559bcc05e677a01e03d3cea81c63c009fb\": container with ID starting with 729023193e6b1bda6fc6d60539a8e4559bcc05e677a01e03d3cea81c63c009fb not found: ID does not exist" containerID="729023193e6b1bda6fc6d60539a8e4559bcc05e677a01e03d3cea81c63c009fb" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.484072 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"729023193e6b1bda6fc6d60539a8e4559bcc05e677a01e03d3cea81c63c009fb"} err="failed to get container status \"729023193e6b1bda6fc6d60539a8e4559bcc05e677a01e03d3cea81c63c009fb\": rpc error: code = NotFound desc = could not find container \"729023193e6b1bda6fc6d60539a8e4559bcc05e677a01e03d3cea81c63c009fb\": container with ID starting with 729023193e6b1bda6fc6d60539a8e4559bcc05e677a01e03d3cea81c63c009fb not found: ID does not exist" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.484098 4893 scope.go:117] "RemoveContainer" containerID="cf99786428fb4340e15a1b7e27396705ce11c36a5509e9d98b7c8b11dc84fe64" Jan 28 15:09:48 crc kubenswrapper[4893]: E0128 15:09:48.484417 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf99786428fb4340e15a1b7e27396705ce11c36a5509e9d98b7c8b11dc84fe64\": container with ID starting with cf99786428fb4340e15a1b7e27396705ce11c36a5509e9d98b7c8b11dc84fe64 not found: ID does not exist" containerID="cf99786428fb4340e15a1b7e27396705ce11c36a5509e9d98b7c8b11dc84fe64" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.484448 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf99786428fb4340e15a1b7e27396705ce11c36a5509e9d98b7c8b11dc84fe64"} err="failed to get container status \"cf99786428fb4340e15a1b7e27396705ce11c36a5509e9d98b7c8b11dc84fe64\": rpc error: code = NotFound desc = could not find container 
\"cf99786428fb4340e15a1b7e27396705ce11c36a5509e9d98b7c8b11dc84fe64\": container with ID starting with cf99786428fb4340e15a1b7e27396705ce11c36a5509e9d98b7c8b11dc84fe64 not found: ID does not exist" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.484505 4893 scope.go:117] "RemoveContainer" containerID="634c36920bba0bb3ce14092e60fa8095188f989249b66dc959105714681bc709" Jan 28 15:09:48 crc kubenswrapper[4893]: E0128 15:09:48.484726 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"634c36920bba0bb3ce14092e60fa8095188f989249b66dc959105714681bc709\": container with ID starting with 634c36920bba0bb3ce14092e60fa8095188f989249b66dc959105714681bc709 not found: ID does not exist" containerID="634c36920bba0bb3ce14092e60fa8095188f989249b66dc959105714681bc709" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.484753 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"634c36920bba0bb3ce14092e60fa8095188f989249b66dc959105714681bc709"} err="failed to get container status \"634c36920bba0bb3ce14092e60fa8095188f989249b66dc959105714681bc709\": rpc error: code = NotFound desc = could not find container \"634c36920bba0bb3ce14092e60fa8095188f989249b66dc959105714681bc709\": container with ID starting with 634c36920bba0bb3ce14092e60fa8095188f989249b66dc959105714681bc709 not found: ID does not exist" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.484766 4893 scope.go:117] "RemoveContainer" containerID="78467fdbd2a193568e7e45add517b915c8ed8b5de5fd0590125fadb1d857c5a9" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.487192 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5gtgr"] Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.490589 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5gtgr"] Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.498848 4893 scope.go:117] "RemoveContainer" containerID="fc19aea9d4aaf64072a93bc9f9ebcc442e0b07df4d0aace317583e482e4b03ae" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.512950 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-prr4s"] Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.518907 4893 scope.go:117] "RemoveContainer" containerID="828b24fe12f1c2287840e215bb42155bae588e19c90ecff5f57058c760f49c6d" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.526769 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.531737 4893 scope.go:117] "RemoveContainer" containerID="78467fdbd2a193568e7e45add517b915c8ed8b5de5fd0590125fadb1d857c5a9" Jan 28 15:09:48 crc kubenswrapper[4893]: E0128 15:09:48.532833 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78467fdbd2a193568e7e45add517b915c8ed8b5de5fd0590125fadb1d857c5a9\": container with ID starting with 78467fdbd2a193568e7e45add517b915c8ed8b5de5fd0590125fadb1d857c5a9 not found: ID does not exist" containerID="78467fdbd2a193568e7e45add517b915c8ed8b5de5fd0590125fadb1d857c5a9" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.532864 4893 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"78467fdbd2a193568e7e45add517b915c8ed8b5de5fd0590125fadb1d857c5a9"} err="failed to get container status \"78467fdbd2a193568e7e45add517b915c8ed8b5de5fd0590125fadb1d857c5a9\": rpc error: code = NotFound desc = could not find container \"78467fdbd2a193568e7e45add517b915c8ed8b5de5fd0590125fadb1d857c5a9\": container with ID starting with 78467fdbd2a193568e7e45add517b915c8ed8b5de5fd0590125fadb1d857c5a9 not found: ID does not exist" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.532886 4893 scope.go:117] "RemoveContainer" containerID="fc19aea9d4aaf64072a93bc9f9ebcc442e0b07df4d0aace317583e482e4b03ae" Jan 28 15:09:48 crc kubenswrapper[4893]: E0128 15:09:48.533073 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc19aea9d4aaf64072a93bc9f9ebcc442e0b07df4d0aace317583e482e4b03ae\": container with ID starting with fc19aea9d4aaf64072a93bc9f9ebcc442e0b07df4d0aace317583e482e4b03ae not found: ID does not exist" containerID="fc19aea9d4aaf64072a93bc9f9ebcc442e0b07df4d0aace317583e482e4b03ae" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.533098 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc19aea9d4aaf64072a93bc9f9ebcc442e0b07df4d0aace317583e482e4b03ae"} err="failed to get container status \"fc19aea9d4aaf64072a93bc9f9ebcc442e0b07df4d0aace317583e482e4b03ae\": rpc error: code = NotFound desc = could not find container \"fc19aea9d4aaf64072a93bc9f9ebcc442e0b07df4d0aace317583e482e4b03ae\": container with ID starting with fc19aea9d4aaf64072a93bc9f9ebcc442e0b07df4d0aace317583e482e4b03ae not found: ID does not exist" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.533112 4893 scope.go:117] "RemoveContainer" containerID="828b24fe12f1c2287840e215bb42155bae588e19c90ecff5f57058c760f49c6d" Jan 28 15:09:48 crc kubenswrapper[4893]: E0128 15:09:48.533323 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"828b24fe12f1c2287840e215bb42155bae588e19c90ecff5f57058c760f49c6d\": container with ID starting with 828b24fe12f1c2287840e215bb42155bae588e19c90ecff5f57058c760f49c6d not found: ID does not exist" containerID="828b24fe12f1c2287840e215bb42155bae588e19c90ecff5f57058c760f49c6d" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.533344 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"828b24fe12f1c2287840e215bb42155bae588e19c90ecff5f57058c760f49c6d"} err="failed to get container status \"828b24fe12f1c2287840e215bb42155bae588e19c90ecff5f57058c760f49c6d\": rpc error: code = NotFound desc = could not find container \"828b24fe12f1c2287840e215bb42155bae588e19c90ecff5f57058c760f49c6d\": container with ID starting with 828b24fe12f1c2287840e215bb42155bae588e19c90ecff5f57058c760f49c6d not found: ID does not exist" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.533357 4893 scope.go:117] "RemoveContainer" containerID="c8903d241b056aba7483e427b14f484ac5b6792c53ac188643254d9fb98566a8" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.546309 4893 scope.go:117] "RemoveContainer" containerID="4b8937070b2652128903b3264abad9f3199c4f3176fad1653a47d3898492c053" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.563106 4893 scope.go:117] "RemoveContainer" containerID="3d2641441d869cd8fe41b19194f6959a91453747b9b29287265990911342caa0" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.584893 4893 
scope.go:117] "RemoveContainer" containerID="c8903d241b056aba7483e427b14f484ac5b6792c53ac188643254d9fb98566a8" Jan 28 15:09:48 crc kubenswrapper[4893]: E0128 15:09:48.585489 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8903d241b056aba7483e427b14f484ac5b6792c53ac188643254d9fb98566a8\": container with ID starting with c8903d241b056aba7483e427b14f484ac5b6792c53ac188643254d9fb98566a8 not found: ID does not exist" containerID="c8903d241b056aba7483e427b14f484ac5b6792c53ac188643254d9fb98566a8" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.585545 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8903d241b056aba7483e427b14f484ac5b6792c53ac188643254d9fb98566a8"} err="failed to get container status \"c8903d241b056aba7483e427b14f484ac5b6792c53ac188643254d9fb98566a8\": rpc error: code = NotFound desc = could not find container \"c8903d241b056aba7483e427b14f484ac5b6792c53ac188643254d9fb98566a8\": container with ID starting with c8903d241b056aba7483e427b14f484ac5b6792c53ac188643254d9fb98566a8 not found: ID does not exist" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.585583 4893 scope.go:117] "RemoveContainer" containerID="4b8937070b2652128903b3264abad9f3199c4f3176fad1653a47d3898492c053" Jan 28 15:09:48 crc kubenswrapper[4893]: E0128 15:09:48.585924 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b8937070b2652128903b3264abad9f3199c4f3176fad1653a47d3898492c053\": container with ID starting with 4b8937070b2652128903b3264abad9f3199c4f3176fad1653a47d3898492c053 not found: ID does not exist" containerID="4b8937070b2652128903b3264abad9f3199c4f3176fad1653a47d3898492c053" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.585982 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b8937070b2652128903b3264abad9f3199c4f3176fad1653a47d3898492c053"} err="failed to get container status \"4b8937070b2652128903b3264abad9f3199c4f3176fad1653a47d3898492c053\": rpc error: code = NotFound desc = could not find container \"4b8937070b2652128903b3264abad9f3199c4f3176fad1653a47d3898492c053\": container with ID starting with 4b8937070b2652128903b3264abad9f3199c4f3176fad1653a47d3898492c053 not found: ID does not exist" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.586013 4893 scope.go:117] "RemoveContainer" containerID="3d2641441d869cd8fe41b19194f6959a91453747b9b29287265990911342caa0" Jan 28 15:09:48 crc kubenswrapper[4893]: E0128 15:09:48.586434 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d2641441d869cd8fe41b19194f6959a91453747b9b29287265990911342caa0\": container with ID starting with 3d2641441d869cd8fe41b19194f6959a91453747b9b29287265990911342caa0 not found: ID does not exist" containerID="3d2641441d869cd8fe41b19194f6959a91453747b9b29287265990911342caa0" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.586499 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d2641441d869cd8fe41b19194f6959a91453747b9b29287265990911342caa0"} err="failed to get container status \"3d2641441d869cd8fe41b19194f6959a91453747b9b29287265990911342caa0\": rpc error: code = NotFound desc = could not find container \"3d2641441d869cd8fe41b19194f6959a91453747b9b29287265990911342caa0\": container with ID starting with 
3d2641441d869cd8fe41b19194f6959a91453747b9b29287265990911342caa0 not found: ID does not exist" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.586531 4893 scope.go:117] "RemoveContainer" containerID="3196393a71fd4433f0a75e645bcee184cc0cfb262a7fffeb39aa66bb1a00dbbc" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.619810 4893 scope.go:117] "RemoveContainer" containerID="3d4a9aca369ea84e9c9cd79125c3f05ae4fa265351806d8203db37dc6c33ee55" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.633629 4893 scope.go:117] "RemoveContainer" containerID="833ad2f85b41d5b7ce33f205d41255823e4bb686a24e9fecbd675270ab27fc99" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.646314 4893 scope.go:117] "RemoveContainer" containerID="3196393a71fd4433f0a75e645bcee184cc0cfb262a7fffeb39aa66bb1a00dbbc" Jan 28 15:09:48 crc kubenswrapper[4893]: E0128 15:09:48.646767 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3196393a71fd4433f0a75e645bcee184cc0cfb262a7fffeb39aa66bb1a00dbbc\": container with ID starting with 3196393a71fd4433f0a75e645bcee184cc0cfb262a7fffeb39aa66bb1a00dbbc not found: ID does not exist" containerID="3196393a71fd4433f0a75e645bcee184cc0cfb262a7fffeb39aa66bb1a00dbbc" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.646800 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3196393a71fd4433f0a75e645bcee184cc0cfb262a7fffeb39aa66bb1a00dbbc"} err="failed to get container status \"3196393a71fd4433f0a75e645bcee184cc0cfb262a7fffeb39aa66bb1a00dbbc\": rpc error: code = NotFound desc = could not find container \"3196393a71fd4433f0a75e645bcee184cc0cfb262a7fffeb39aa66bb1a00dbbc\": container with ID starting with 3196393a71fd4433f0a75e645bcee184cc0cfb262a7fffeb39aa66bb1a00dbbc not found: ID does not exist" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.646821 4893 scope.go:117] "RemoveContainer" containerID="3d4a9aca369ea84e9c9cd79125c3f05ae4fa265351806d8203db37dc6c33ee55" Jan 28 15:09:48 crc kubenswrapper[4893]: E0128 15:09:48.647160 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d4a9aca369ea84e9c9cd79125c3f05ae4fa265351806d8203db37dc6c33ee55\": container with ID starting with 3d4a9aca369ea84e9c9cd79125c3f05ae4fa265351806d8203db37dc6c33ee55 not found: ID does not exist" containerID="3d4a9aca369ea84e9c9cd79125c3f05ae4fa265351806d8203db37dc6c33ee55" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.647199 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d4a9aca369ea84e9c9cd79125c3f05ae4fa265351806d8203db37dc6c33ee55"} err="failed to get container status \"3d4a9aca369ea84e9c9cd79125c3f05ae4fa265351806d8203db37dc6c33ee55\": rpc error: code = NotFound desc = could not find container \"3d4a9aca369ea84e9c9cd79125c3f05ae4fa265351806d8203db37dc6c33ee55\": container with ID starting with 3d4a9aca369ea84e9c9cd79125c3f05ae4fa265351806d8203db37dc6c33ee55 not found: ID does not exist" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.647213 4893 scope.go:117] "RemoveContainer" containerID="833ad2f85b41d5b7ce33f205d41255823e4bb686a24e9fecbd675270ab27fc99" Jan 28 15:09:48 crc kubenswrapper[4893]: E0128 15:09:48.647681 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"833ad2f85b41d5b7ce33f205d41255823e4bb686a24e9fecbd675270ab27fc99\": container 
with ID starting with 833ad2f85b41d5b7ce33f205d41255823e4bb686a24e9fecbd675270ab27fc99 not found: ID does not exist" containerID="833ad2f85b41d5b7ce33f205d41255823e4bb686a24e9fecbd675270ab27fc99" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.647702 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"833ad2f85b41d5b7ce33f205d41255823e4bb686a24e9fecbd675270ab27fc99"} err="failed to get container status \"833ad2f85b41d5b7ce33f205d41255823e4bb686a24e9fecbd675270ab27fc99\": rpc error: code = NotFound desc = could not find container \"833ad2f85b41d5b7ce33f205d41255823e4bb686a24e9fecbd675270ab27fc99\": container with ID starting with 833ad2f85b41d5b7ce33f205d41255823e4bb686a24e9fecbd675270ab27fc99 not found: ID does not exist" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.647737 4893 scope.go:117] "RemoveContainer" containerID="2a4656bdd30cb8f1162410c9a24ad5ed87a5abd8d1bc59ab392abb0842545b50" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.659407 4893 scope.go:117] "RemoveContainer" containerID="5f7dbf0ce267fc9b6893df92fe6adfff76f434e191e61748f30e887f981629b4" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.675370 4893 scope.go:117] "RemoveContainer" containerID="2a4656bdd30cb8f1162410c9a24ad5ed87a5abd8d1bc59ab392abb0842545b50" Jan 28 15:09:48 crc kubenswrapper[4893]: E0128 15:09:48.675832 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a4656bdd30cb8f1162410c9a24ad5ed87a5abd8d1bc59ab392abb0842545b50\": container with ID starting with 2a4656bdd30cb8f1162410c9a24ad5ed87a5abd8d1bc59ab392abb0842545b50 not found: ID does not exist" containerID="2a4656bdd30cb8f1162410c9a24ad5ed87a5abd8d1bc59ab392abb0842545b50" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.675868 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a4656bdd30cb8f1162410c9a24ad5ed87a5abd8d1bc59ab392abb0842545b50"} err="failed to get container status \"2a4656bdd30cb8f1162410c9a24ad5ed87a5abd8d1bc59ab392abb0842545b50\": rpc error: code = NotFound desc = could not find container \"2a4656bdd30cb8f1162410c9a24ad5ed87a5abd8d1bc59ab392abb0842545b50\": container with ID starting with 2a4656bdd30cb8f1162410c9a24ad5ed87a5abd8d1bc59ab392abb0842545b50 not found: ID does not exist" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.675896 4893 scope.go:117] "RemoveContainer" containerID="5f7dbf0ce267fc9b6893df92fe6adfff76f434e191e61748f30e887f981629b4" Jan 28 15:09:48 crc kubenswrapper[4893]: E0128 15:09:48.676291 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f7dbf0ce267fc9b6893df92fe6adfff76f434e191e61748f30e887f981629b4\": container with ID starting with 5f7dbf0ce267fc9b6893df92fe6adfff76f434e191e61748f30e887f981629b4 not found: ID does not exist" containerID="5f7dbf0ce267fc9b6893df92fe6adfff76f434e191e61748f30e887f981629b4" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.676320 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f7dbf0ce267fc9b6893df92fe6adfff76f434e191e61748f30e887f981629b4"} err="failed to get container status \"5f7dbf0ce267fc9b6893df92fe6adfff76f434e191e61748f30e887f981629b4\": rpc error: code = NotFound desc = could not find container \"5f7dbf0ce267fc9b6893df92fe6adfff76f434e191e61748f30e887f981629b4\": container with ID starting with 
5f7dbf0ce267fc9b6893df92fe6adfff76f434e191e61748f30e887f981629b4 not found: ID does not exist" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.709593 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nwlnm"] Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.714759 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nwlnm"] Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.898440 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43843abc-ea99-476a-81c0-76d6530f7c75" path="/var/lib/kubelet/pods/43843abc-ea99-476a-81c0-76d6530f7c75/volumes" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.899282 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94f2541b-4f69-4bbc-9388-c040e53d85a0" path="/var/lib/kubelet/pods/94f2541b-4f69-4bbc-9388-c040e53d85a0/volumes" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.899882 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a587792-e86e-434f-873e-c7ce3aac8bce" path="/var/lib/kubelet/pods/9a587792-e86e-434f-873e-c7ce3aac8bce/volumes" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.900741 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ace4b0ad-d8d3-48aa-8635-6e6e96030672" path="/var/lib/kubelet/pods/ace4b0ad-d8d3-48aa-8635-6e6e96030672/volumes" Jan 28 15:09:48 crc kubenswrapper[4893]: I0128 15:09:48.901285 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" path="/var/lib/kubelet/pods/f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2/volumes" Jan 28 15:09:49 crc kubenswrapper[4893]: I0128 15:09:49.414124 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-prr4s" event={"ID":"07fe07b9-ae23-4203-b85e-02462161f5b3","Type":"ContainerStarted","Data":"c69543910a496b32682ce09248a123f4a8d248dbdfc7ff57f8e92426885c1ffb"} Jan 28 15:09:49 crc kubenswrapper[4893]: I0128 15:09:49.414437 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-prr4s" event={"ID":"07fe07b9-ae23-4203-b85e-02462161f5b3","Type":"ContainerStarted","Data":"ae1f16b40e8477a063e16997ddff26d3da0aeac314c6b90ccbf4c2f0bb6e2a89"} Jan 28 15:09:49 crc kubenswrapper[4893]: I0128 15:09:49.414855 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-prr4s" Jan 28 15:09:49 crc kubenswrapper[4893]: I0128 15:09:49.417767 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-prr4s" Jan 28 15:09:49 crc kubenswrapper[4893]: I0128 15:09:49.430738 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-prr4s" podStartSLOduration=2.430715427 podStartE2EDuration="2.430715427s" podCreationTimestamp="2026-01-28 15:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:09:49.428608279 +0000 UTC m=+507.202223337" watchObservedRunningTime="2026-01-28 15:09:49.430715427 +0000 UTC m=+507.204330455" Jan 28 15:09:49 crc kubenswrapper[4893]: I0128 15:09:49.856825 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xlssl"] Jan 28 15:09:49 crc kubenswrapper[4893]: 
E0128 15:09:49.857101 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94f2541b-4f69-4bbc-9388-c040e53d85a0" containerName="extract-utilities" Jan 28 15:09:49 crc kubenswrapper[4893]: I0128 15:09:49.857113 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="94f2541b-4f69-4bbc-9388-c040e53d85a0" containerName="extract-utilities" Jan 28 15:09:49 crc kubenswrapper[4893]: E0128 15:09:49.857126 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" containerName="registry-server" Jan 28 15:09:49 crc kubenswrapper[4893]: I0128 15:09:49.857133 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" containerName="registry-server" Jan 28 15:09:49 crc kubenswrapper[4893]: E0128 15:09:49.857142 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace4b0ad-d8d3-48aa-8635-6e6e96030672" containerName="extract-utilities" Jan 28 15:09:49 crc kubenswrapper[4893]: I0128 15:09:49.857148 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace4b0ad-d8d3-48aa-8635-6e6e96030672" containerName="extract-utilities" Jan 28 15:09:49 crc kubenswrapper[4893]: E0128 15:09:49.857175 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace4b0ad-d8d3-48aa-8635-6e6e96030672" containerName="extract-content" Jan 28 15:09:49 crc kubenswrapper[4893]: I0128 15:09:49.857181 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace4b0ad-d8d3-48aa-8635-6e6e96030672" containerName="extract-content" Jan 28 15:09:49 crc kubenswrapper[4893]: E0128 15:09:49.857190 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94f2541b-4f69-4bbc-9388-c040e53d85a0" containerName="registry-server" Jan 28 15:09:49 crc kubenswrapper[4893]: I0128 15:09:49.857196 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="94f2541b-4f69-4bbc-9388-c040e53d85a0" containerName="registry-server" Jan 28 15:09:49 crc kubenswrapper[4893]: E0128 15:09:49.857207 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a587792-e86e-434f-873e-c7ce3aac8bce" containerName="marketplace-operator" Jan 28 15:09:49 crc kubenswrapper[4893]: I0128 15:09:49.857213 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a587792-e86e-434f-873e-c7ce3aac8bce" containerName="marketplace-operator" Jan 28 15:09:49 crc kubenswrapper[4893]: E0128 15:09:49.857222 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace4b0ad-d8d3-48aa-8635-6e6e96030672" containerName="registry-server" Jan 28 15:09:49 crc kubenswrapper[4893]: I0128 15:09:49.857243 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace4b0ad-d8d3-48aa-8635-6e6e96030672" containerName="registry-server" Jan 28 15:09:49 crc kubenswrapper[4893]: E0128 15:09:49.857255 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94f2541b-4f69-4bbc-9388-c040e53d85a0" containerName="extract-content" Jan 28 15:09:49 crc kubenswrapper[4893]: I0128 15:09:49.857261 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="94f2541b-4f69-4bbc-9388-c040e53d85a0" containerName="extract-content" Jan 28 15:09:49 crc kubenswrapper[4893]: E0128 15:09:49.857270 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43843abc-ea99-476a-81c0-76d6530f7c75" containerName="extract-content" Jan 28 15:09:49 crc kubenswrapper[4893]: I0128 15:09:49.857277 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="43843abc-ea99-476a-81c0-76d6530f7c75" containerName="extract-content" Jan 
28 15:09:49 crc kubenswrapper[4893]: E0128 15:09:49.857286 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" containerName="extract-utilities" Jan 28 15:09:49 crc kubenswrapper[4893]: I0128 15:09:49.857291 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" containerName="extract-utilities" Jan 28 15:09:49 crc kubenswrapper[4893]: E0128 15:09:49.857299 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43843abc-ea99-476a-81c0-76d6530f7c75" containerName="extract-utilities" Jan 28 15:09:49 crc kubenswrapper[4893]: I0128 15:09:49.857304 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="43843abc-ea99-476a-81c0-76d6530f7c75" containerName="extract-utilities" Jan 28 15:09:49 crc kubenswrapper[4893]: E0128 15:09:49.857334 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43843abc-ea99-476a-81c0-76d6530f7c75" containerName="registry-server" Jan 28 15:09:49 crc kubenswrapper[4893]: I0128 15:09:49.857342 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="43843abc-ea99-476a-81c0-76d6530f7c75" containerName="registry-server" Jan 28 15:09:49 crc kubenswrapper[4893]: E0128 15:09:49.857355 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" containerName="extract-content" Jan 28 15:09:49 crc kubenswrapper[4893]: I0128 15:09:49.857361 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" containerName="extract-content" Jan 28 15:09:49 crc kubenswrapper[4893]: E0128 15:09:49.857372 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a587792-e86e-434f-873e-c7ce3aac8bce" containerName="marketplace-operator" Jan 28 15:09:49 crc kubenswrapper[4893]: I0128 15:09:49.857379 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a587792-e86e-434f-873e-c7ce3aac8bce" containerName="marketplace-operator" Jan 28 15:09:49 crc kubenswrapper[4893]: I0128 15:09:49.857512 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="43843abc-ea99-476a-81c0-76d6530f7c75" containerName="registry-server" Jan 28 15:09:49 crc kubenswrapper[4893]: I0128 15:09:49.857523 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f69fe16f-cdc0-4aa4-aec1-2dd915eed2d2" containerName="registry-server" Jan 28 15:09:49 crc kubenswrapper[4893]: I0128 15:09:49.857532 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a587792-e86e-434f-873e-c7ce3aac8bce" containerName="marketplace-operator" Jan 28 15:09:49 crc kubenswrapper[4893]: I0128 15:09:49.857543 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="ace4b0ad-d8d3-48aa-8635-6e6e96030672" containerName="registry-server" Jan 28 15:09:49 crc kubenswrapper[4893]: I0128 15:09:49.857573 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="94f2541b-4f69-4bbc-9388-c040e53d85a0" containerName="registry-server" Jan 28 15:09:49 crc kubenswrapper[4893]: I0128 15:09:49.857786 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a587792-e86e-434f-873e-c7ce3aac8bce" containerName="marketplace-operator" Jan 28 15:09:49 crc kubenswrapper[4893]: I0128 15:09:49.858387 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xlssl" Jan 28 15:09:49 crc kubenswrapper[4893]: I0128 15:09:49.860728 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 15:09:49 crc kubenswrapper[4893]: I0128 15:09:49.867161 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xlssl"] Jan 28 15:09:50 crc kubenswrapper[4893]: I0128 15:09:50.046813 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5sss\" (UniqueName: \"kubernetes.io/projected/fc249fd3-d895-44db-8a63-38334231d809-kube-api-access-w5sss\") pod \"redhat-marketplace-xlssl\" (UID: \"fc249fd3-d895-44db-8a63-38334231d809\") " pod="openshift-marketplace/redhat-marketplace-xlssl" Jan 28 15:09:50 crc kubenswrapper[4893]: I0128 15:09:50.046907 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc249fd3-d895-44db-8a63-38334231d809-utilities\") pod \"redhat-marketplace-xlssl\" (UID: \"fc249fd3-d895-44db-8a63-38334231d809\") " pod="openshift-marketplace/redhat-marketplace-xlssl" Jan 28 15:09:50 crc kubenswrapper[4893]: I0128 15:09:50.047024 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc249fd3-d895-44db-8a63-38334231d809-catalog-content\") pod \"redhat-marketplace-xlssl\" (UID: \"fc249fd3-d895-44db-8a63-38334231d809\") " pod="openshift-marketplace/redhat-marketplace-xlssl" Jan 28 15:09:50 crc kubenswrapper[4893]: I0128 15:09:50.050111 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b7fw8"] Jan 28 15:09:50 crc kubenswrapper[4893]: I0128 15:09:50.052463 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b7fw8" Jan 28 15:09:50 crc kubenswrapper[4893]: I0128 15:09:50.054263 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 15:09:50 crc kubenswrapper[4893]: I0128 15:09:50.059230 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b7fw8"] Jan 28 15:09:50 crc kubenswrapper[4893]: I0128 15:09:50.148181 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8p8s\" (UniqueName: \"kubernetes.io/projected/c21ef389-3376-4802-93c1-3115af586c8b-kube-api-access-x8p8s\") pod \"redhat-operators-b7fw8\" (UID: \"c21ef389-3376-4802-93c1-3115af586c8b\") " pod="openshift-marketplace/redhat-operators-b7fw8" Jan 28 15:09:50 crc kubenswrapper[4893]: I0128 15:09:50.148262 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc249fd3-d895-44db-8a63-38334231d809-catalog-content\") pod \"redhat-marketplace-xlssl\" (UID: \"fc249fd3-d895-44db-8a63-38334231d809\") " pod="openshift-marketplace/redhat-marketplace-xlssl" Jan 28 15:09:50 crc kubenswrapper[4893]: I0128 15:09:50.148413 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21ef389-3376-4802-93c1-3115af586c8b-utilities\") pod \"redhat-operators-b7fw8\" (UID: \"c21ef389-3376-4802-93c1-3115af586c8b\") " pod="openshift-marketplace/redhat-operators-b7fw8" Jan 28 15:09:50 crc kubenswrapper[4893]: I0128 15:09:50.148468 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5sss\" (UniqueName: \"kubernetes.io/projected/fc249fd3-d895-44db-8a63-38334231d809-kube-api-access-w5sss\") pod \"redhat-marketplace-xlssl\" (UID: \"fc249fd3-d895-44db-8a63-38334231d809\") " pod="openshift-marketplace/redhat-marketplace-xlssl" Jan 28 15:09:50 crc kubenswrapper[4893]: I0128 15:09:50.148532 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc249fd3-d895-44db-8a63-38334231d809-utilities\") pod \"redhat-marketplace-xlssl\" (UID: \"fc249fd3-d895-44db-8a63-38334231d809\") " pod="openshift-marketplace/redhat-marketplace-xlssl" Jan 28 15:09:50 crc kubenswrapper[4893]: I0128 15:09:50.148572 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21ef389-3376-4802-93c1-3115af586c8b-catalog-content\") pod \"redhat-operators-b7fw8\" (UID: \"c21ef389-3376-4802-93c1-3115af586c8b\") " pod="openshift-marketplace/redhat-operators-b7fw8" Jan 28 15:09:50 crc kubenswrapper[4893]: I0128 15:09:50.148873 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc249fd3-d895-44db-8a63-38334231d809-catalog-content\") pod \"redhat-marketplace-xlssl\" (UID: \"fc249fd3-d895-44db-8a63-38334231d809\") " pod="openshift-marketplace/redhat-marketplace-xlssl" Jan 28 15:09:50 crc kubenswrapper[4893]: I0128 15:09:50.148926 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc249fd3-d895-44db-8a63-38334231d809-utilities\") pod \"redhat-marketplace-xlssl\" (UID: 
\"fc249fd3-d895-44db-8a63-38334231d809\") " pod="openshift-marketplace/redhat-marketplace-xlssl" Jan 28 15:09:50 crc kubenswrapper[4893]: I0128 15:09:50.175619 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5sss\" (UniqueName: \"kubernetes.io/projected/fc249fd3-d895-44db-8a63-38334231d809-kube-api-access-w5sss\") pod \"redhat-marketplace-xlssl\" (UID: \"fc249fd3-d895-44db-8a63-38334231d809\") " pod="openshift-marketplace/redhat-marketplace-xlssl" Jan 28 15:09:50 crc kubenswrapper[4893]: I0128 15:09:50.225341 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xlssl" Jan 28 15:09:50 crc kubenswrapper[4893]: I0128 15:09:50.252263 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21ef389-3376-4802-93c1-3115af586c8b-utilities\") pod \"redhat-operators-b7fw8\" (UID: \"c21ef389-3376-4802-93c1-3115af586c8b\") " pod="openshift-marketplace/redhat-operators-b7fw8" Jan 28 15:09:50 crc kubenswrapper[4893]: I0128 15:09:50.252340 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21ef389-3376-4802-93c1-3115af586c8b-catalog-content\") pod \"redhat-operators-b7fw8\" (UID: \"c21ef389-3376-4802-93c1-3115af586c8b\") " pod="openshift-marketplace/redhat-operators-b7fw8" Jan 28 15:09:50 crc kubenswrapper[4893]: I0128 15:09:50.252381 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8p8s\" (UniqueName: \"kubernetes.io/projected/c21ef389-3376-4802-93c1-3115af586c8b-kube-api-access-x8p8s\") pod \"redhat-operators-b7fw8\" (UID: \"c21ef389-3376-4802-93c1-3115af586c8b\") " pod="openshift-marketplace/redhat-operators-b7fw8" Jan 28 15:09:50 crc kubenswrapper[4893]: I0128 15:09:50.253619 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21ef389-3376-4802-93c1-3115af586c8b-utilities\") pod \"redhat-operators-b7fw8\" (UID: \"c21ef389-3376-4802-93c1-3115af586c8b\") " pod="openshift-marketplace/redhat-operators-b7fw8" Jan 28 15:09:50 crc kubenswrapper[4893]: I0128 15:09:50.253889 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21ef389-3376-4802-93c1-3115af586c8b-catalog-content\") pod \"redhat-operators-b7fw8\" (UID: \"c21ef389-3376-4802-93c1-3115af586c8b\") " pod="openshift-marketplace/redhat-operators-b7fw8" Jan 28 15:09:50 crc kubenswrapper[4893]: I0128 15:09:50.283430 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8p8s\" (UniqueName: \"kubernetes.io/projected/c21ef389-3376-4802-93c1-3115af586c8b-kube-api-access-x8p8s\") pod \"redhat-operators-b7fw8\" (UID: \"c21ef389-3376-4802-93c1-3115af586c8b\") " pod="openshift-marketplace/redhat-operators-b7fw8" Jan 28 15:09:50 crc kubenswrapper[4893]: I0128 15:09:50.372744 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b7fw8" Jan 28 15:09:50 crc kubenswrapper[4893]: I0128 15:09:50.596594 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b7fw8"] Jan 28 15:09:50 crc kubenswrapper[4893]: W0128 15:09:50.601779 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc21ef389_3376_4802_93c1_3115af586c8b.slice/crio-ad54dbcbd8842ab5fd8ece097bbff0065342c15556a0a5d725a0d895f614e948 WatchSource:0}: Error finding container ad54dbcbd8842ab5fd8ece097bbff0065342c15556a0a5d725a0d895f614e948: Status 404 returned error can't find the container with id ad54dbcbd8842ab5fd8ece097bbff0065342c15556a0a5d725a0d895f614e948 Jan 28 15:09:50 crc kubenswrapper[4893]: I0128 15:09:50.706074 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xlssl"] Jan 28 15:09:50 crc kubenswrapper[4893]: W0128 15:09:50.709648 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc249fd3_d895_44db_8a63_38334231d809.slice/crio-343808ae61e7e3b1a20a1cd02d67380223d80a31b1152b90e4aaa456a542b8a7 WatchSource:0}: Error finding container 343808ae61e7e3b1a20a1cd02d67380223d80a31b1152b90e4aaa456a542b8a7: Status 404 returned error can't find the container with id 343808ae61e7e3b1a20a1cd02d67380223d80a31b1152b90e4aaa456a542b8a7 Jan 28 15:09:51 crc kubenswrapper[4893]: I0128 15:09:51.440922 4893 generic.go:334] "Generic (PLEG): container finished" podID="c21ef389-3376-4802-93c1-3115af586c8b" containerID="d05dee5dbc97a443b912721dd8670ea95da5b05036c49906b69866ca727c638c" exitCode=0 Jan 28 15:09:51 crc kubenswrapper[4893]: I0128 15:09:51.441031 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7fw8" event={"ID":"c21ef389-3376-4802-93c1-3115af586c8b","Type":"ContainerDied","Data":"d05dee5dbc97a443b912721dd8670ea95da5b05036c49906b69866ca727c638c"} Jan 28 15:09:51 crc kubenswrapper[4893]: I0128 15:09:51.441064 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7fw8" event={"ID":"c21ef389-3376-4802-93c1-3115af586c8b","Type":"ContainerStarted","Data":"ad54dbcbd8842ab5fd8ece097bbff0065342c15556a0a5d725a0d895f614e948"} Jan 28 15:09:51 crc kubenswrapper[4893]: I0128 15:09:51.442582 4893 generic.go:334] "Generic (PLEG): container finished" podID="fc249fd3-d895-44db-8a63-38334231d809" containerID="0cbd22eb5fffc86e0522cc6206b8abd88020d5c1fbe06708332eaf3d09fb460c" exitCode=0 Jan 28 15:09:51 crc kubenswrapper[4893]: I0128 15:09:51.443409 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xlssl" event={"ID":"fc249fd3-d895-44db-8a63-38334231d809","Type":"ContainerDied","Data":"0cbd22eb5fffc86e0522cc6206b8abd88020d5c1fbe06708332eaf3d09fb460c"} Jan 28 15:09:51 crc kubenswrapper[4893]: I0128 15:09:51.443508 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xlssl" event={"ID":"fc249fd3-d895-44db-8a63-38334231d809","Type":"ContainerStarted","Data":"343808ae61e7e3b1a20a1cd02d67380223d80a31b1152b90e4aaa456a542b8a7"} Jan 28 15:09:51 crc kubenswrapper[4893]: I0128 15:09:51.445178 4893 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.255197 4893 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/community-operators-v4pdb"] Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.256682 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v4pdb" Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.258155 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.270848 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v4pdb"] Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.381171 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7-utilities\") pod \"community-operators-v4pdb\" (UID: \"5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7\") " pod="openshift-marketplace/community-operators-v4pdb" Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.381260 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7-catalog-content\") pod \"community-operators-v4pdb\" (UID: \"5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7\") " pod="openshift-marketplace/community-operators-v4pdb" Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.381284 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kbpg\" (UniqueName: \"kubernetes.io/projected/5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7-kube-api-access-4kbpg\") pod \"community-operators-v4pdb\" (UID: \"5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7\") " pod="openshift-marketplace/community-operators-v4pdb" Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.455448 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jp7tn"] Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.456833 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jp7tn" Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.459527 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.463389 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7fw8" event={"ID":"c21ef389-3376-4802-93c1-3115af586c8b","Type":"ContainerStarted","Data":"fb0dd1332ab28cee0703c174e48d7a742baac191cdbf33ee3a2169f6b2ec7228"} Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.465468 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jp7tn"] Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.467434 4893 generic.go:334] "Generic (PLEG): container finished" podID="fc249fd3-d895-44db-8a63-38334231d809" containerID="9eee7de280dc2d6b99d1fdbcd74aa8ff36f8ef1a95dc7a96b0d7e8aea5fa7b74" exitCode=0 Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.467496 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xlssl" event={"ID":"fc249fd3-d895-44db-8a63-38334231d809","Type":"ContainerDied","Data":"9eee7de280dc2d6b99d1fdbcd74aa8ff36f8ef1a95dc7a96b0d7e8aea5fa7b74"} Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.483138 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7-utilities\") pod \"community-operators-v4pdb\" (UID: \"5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7\") " pod="openshift-marketplace/community-operators-v4pdb" Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.483204 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7-catalog-content\") pod \"community-operators-v4pdb\" (UID: \"5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7\") " pod="openshift-marketplace/community-operators-v4pdb" Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.483236 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kbpg\" (UniqueName: \"kubernetes.io/projected/5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7-kube-api-access-4kbpg\") pod \"community-operators-v4pdb\" (UID: \"5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7\") " pod="openshift-marketplace/community-operators-v4pdb" Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.483712 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7-utilities\") pod \"community-operators-v4pdb\" (UID: \"5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7\") " pod="openshift-marketplace/community-operators-v4pdb" Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.483940 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7-catalog-content\") pod \"community-operators-v4pdb\" (UID: \"5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7\") " pod="openshift-marketplace/community-operators-v4pdb" Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.509858 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kbpg\" (UniqueName: 
\"kubernetes.io/projected/5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7-kube-api-access-4kbpg\") pod \"community-operators-v4pdb\" (UID: \"5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7\") " pod="openshift-marketplace/community-operators-v4pdb" Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.581011 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v4pdb" Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.583881 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631-utilities\") pod \"certified-operators-jp7tn\" (UID: \"4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631\") " pod="openshift-marketplace/certified-operators-jp7tn" Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.584523 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631-catalog-content\") pod \"certified-operators-jp7tn\" (UID: \"4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631\") " pod="openshift-marketplace/certified-operators-jp7tn" Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.584552 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ln7m\" (UniqueName: \"kubernetes.io/projected/4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631-kube-api-access-6ln7m\") pod \"certified-operators-jp7tn\" (UID: \"4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631\") " pod="openshift-marketplace/certified-operators-jp7tn" Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.685551 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631-catalog-content\") pod \"certified-operators-jp7tn\" (UID: \"4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631\") " pod="openshift-marketplace/certified-operators-jp7tn" Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.685618 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ln7m\" (UniqueName: \"kubernetes.io/projected/4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631-kube-api-access-6ln7m\") pod \"certified-operators-jp7tn\" (UID: \"4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631\") " pod="openshift-marketplace/certified-operators-jp7tn" Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.685703 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631-utilities\") pod \"certified-operators-jp7tn\" (UID: \"4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631\") " pod="openshift-marketplace/certified-operators-jp7tn" Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.686124 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631-catalog-content\") pod \"certified-operators-jp7tn\" (UID: \"4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631\") " pod="openshift-marketplace/certified-operators-jp7tn" Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.686147 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631-utilities\") pod \"certified-operators-jp7tn\" (UID: 
\"4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631\") " pod="openshift-marketplace/certified-operators-jp7tn" Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.706932 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ln7m\" (UniqueName: \"kubernetes.io/projected/4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631-kube-api-access-6ln7m\") pod \"certified-operators-jp7tn\" (UID: \"4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631\") " pod="openshift-marketplace/certified-operators-jp7tn" Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.775053 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jp7tn" Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.964185 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jp7tn"] Jan 28 15:09:52 crc kubenswrapper[4893]: I0128 15:09:52.983744 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v4pdb"] Jan 28 15:09:52 crc kubenswrapper[4893]: W0128 15:09:52.992347 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b9c92c1_9fb9_4eb9_b5e3_9b4354a34631.slice/crio-24f342261e4e2274795e65e2f7d7e42d2b29baa763fb86367fa335cc707f8748 WatchSource:0}: Error finding container 24f342261e4e2274795e65e2f7d7e42d2b29baa763fb86367fa335cc707f8748: Status 404 returned error can't find the container with id 24f342261e4e2274795e65e2f7d7e42d2b29baa763fb86367fa335cc707f8748 Jan 28 15:09:53 crc kubenswrapper[4893]: W0128 15:09:53.028526 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ecfb5b1_5ca3_4ada_a9ee_85072c22cfd7.slice/crio-6e1ee90ca13903efcc15dd753c25a9d59b100db4010bf35cb3fe8cb89099f7e8 WatchSource:0}: Error finding container 6e1ee90ca13903efcc15dd753c25a9d59b100db4010bf35cb3fe8cb89099f7e8: Status 404 returned error can't find the container with id 6e1ee90ca13903efcc15dd753c25a9d59b100db4010bf35cb3fe8cb89099f7e8 Jan 28 15:09:53 crc kubenswrapper[4893]: I0128 15:09:53.476033 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xlssl" event={"ID":"fc249fd3-d895-44db-8a63-38334231d809","Type":"ContainerStarted","Data":"0fe6cd2354d45642f2f3641d7b87a2d4c86cf02ffde95c5a4298398d9bf13e77"} Jan 28 15:09:53 crc kubenswrapper[4893]: I0128 15:09:53.477547 4893 generic.go:334] "Generic (PLEG): container finished" podID="4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631" containerID="df0b2f90697dc87c84b93c6a0fe0987c0ff3b75ec4f7b283b8892ddc4f7029b8" exitCode=0 Jan 28 15:09:53 crc kubenswrapper[4893]: I0128 15:09:53.477622 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jp7tn" event={"ID":"4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631","Type":"ContainerDied","Data":"df0b2f90697dc87c84b93c6a0fe0987c0ff3b75ec4f7b283b8892ddc4f7029b8"} Jan 28 15:09:53 crc kubenswrapper[4893]: I0128 15:09:53.477837 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jp7tn" event={"ID":"4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631","Type":"ContainerStarted","Data":"24f342261e4e2274795e65e2f7d7e42d2b29baa763fb86367fa335cc707f8748"} Jan 28 15:09:53 crc kubenswrapper[4893]: I0128 15:09:53.480538 4893 generic.go:334] "Generic (PLEG): container finished" podID="c21ef389-3376-4802-93c1-3115af586c8b" 
containerID="fb0dd1332ab28cee0703c174e48d7a742baac191cdbf33ee3a2169f6b2ec7228" exitCode=0 Jan 28 15:09:53 crc kubenswrapper[4893]: I0128 15:09:53.480596 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7fw8" event={"ID":"c21ef389-3376-4802-93c1-3115af586c8b","Type":"ContainerDied","Data":"fb0dd1332ab28cee0703c174e48d7a742baac191cdbf33ee3a2169f6b2ec7228"} Jan 28 15:09:53 crc kubenswrapper[4893]: I0128 15:09:53.483451 4893 generic.go:334] "Generic (PLEG): container finished" podID="5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7" containerID="684f15c3d48248a1b763bd60cb35889749f700fa6ac232dc215cbefda55fa7a1" exitCode=0 Jan 28 15:09:53 crc kubenswrapper[4893]: I0128 15:09:53.483502 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v4pdb" event={"ID":"5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7","Type":"ContainerDied","Data":"684f15c3d48248a1b763bd60cb35889749f700fa6ac232dc215cbefda55fa7a1"} Jan 28 15:09:53 crc kubenswrapper[4893]: I0128 15:09:53.483533 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v4pdb" event={"ID":"5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7","Type":"ContainerStarted","Data":"6e1ee90ca13903efcc15dd753c25a9d59b100db4010bf35cb3fe8cb89099f7e8"} Jan 28 15:09:53 crc kubenswrapper[4893]: I0128 15:09:53.499653 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xlssl" podStartSLOduration=3.031619619 podStartE2EDuration="4.49963704s" podCreationTimestamp="2026-01-28 15:09:49 +0000 UTC" firstStartedPulling="2026-01-28 15:09:51.44487836 +0000 UTC m=+509.218493388" lastFinishedPulling="2026-01-28 15:09:52.912895781 +0000 UTC m=+510.686510809" observedRunningTime="2026-01-28 15:09:53.499232199 +0000 UTC m=+511.272847247" watchObservedRunningTime="2026-01-28 15:09:53.49963704 +0000 UTC m=+511.273252068" Jan 28 15:09:54 crc kubenswrapper[4893]: I0128 15:09:54.491140 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jp7tn" event={"ID":"4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631","Type":"ContainerStarted","Data":"3b3711c97eec086271f96d75bcb49f3440eb571d438efedc658d3ab8f05cb45d"} Jan 28 15:09:54 crc kubenswrapper[4893]: I0128 15:09:54.494561 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7fw8" event={"ID":"c21ef389-3376-4802-93c1-3115af586c8b","Type":"ContainerStarted","Data":"22f0532b966c060ac0ad858e068af00c39fd6a516bcc1dceb5bb8dedd613f315"} Jan 28 15:09:54 crc kubenswrapper[4893]: I0128 15:09:54.496589 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v4pdb" event={"ID":"5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7","Type":"ContainerStarted","Data":"fa2bbb03b9b99ac21153e29ff49aadba6523225a7556e6c84a5ebb130f569614"} Jan 28 15:09:54 crc kubenswrapper[4893]: I0128 15:09:54.537086 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b7fw8" podStartSLOduration=2.047457294 podStartE2EDuration="4.537057326s" podCreationTimestamp="2026-01-28 15:09:50 +0000 UTC" firstStartedPulling="2026-01-28 15:09:51.442989308 +0000 UTC m=+509.216604346" lastFinishedPulling="2026-01-28 15:09:53.93258934 +0000 UTC m=+511.706204378" observedRunningTime="2026-01-28 15:09:54.536645445 +0000 UTC m=+512.310260473" watchObservedRunningTime="2026-01-28 15:09:54.537057326 +0000 UTC m=+512.310672354" Jan 28 15:09:55 crc 
kubenswrapper[4893]: I0128 15:09:55.506102 4893 generic.go:334] "Generic (PLEG): container finished" podID="4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631" containerID="3b3711c97eec086271f96d75bcb49f3440eb571d438efedc658d3ab8f05cb45d" exitCode=0 Jan 28 15:09:55 crc kubenswrapper[4893]: I0128 15:09:55.506178 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jp7tn" event={"ID":"4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631","Type":"ContainerDied","Data":"3b3711c97eec086271f96d75bcb49f3440eb571d438efedc658d3ab8f05cb45d"} Jan 28 15:09:55 crc kubenswrapper[4893]: I0128 15:09:55.508356 4893 generic.go:334] "Generic (PLEG): container finished" podID="5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7" containerID="fa2bbb03b9b99ac21153e29ff49aadba6523225a7556e6c84a5ebb130f569614" exitCode=0 Jan 28 15:09:55 crc kubenswrapper[4893]: I0128 15:09:55.509015 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v4pdb" event={"ID":"5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7","Type":"ContainerDied","Data":"fa2bbb03b9b99ac21153e29ff49aadba6523225a7556e6c84a5ebb130f569614"} Jan 28 15:09:56 crc kubenswrapper[4893]: I0128 15:09:56.518803 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v4pdb" event={"ID":"5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7","Type":"ContainerStarted","Data":"54be0af3ffa98f00a608b25e741abbfacae09d0714ebc2fbc3b28e76091a595e"} Jan 28 15:09:56 crc kubenswrapper[4893]: I0128 15:09:56.521341 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jp7tn" event={"ID":"4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631","Type":"ContainerStarted","Data":"69725eff3270503550f5b8485c34bfcbd018dfdc65069459e5ee30c40897bc9e"} Jan 28 15:09:56 crc kubenswrapper[4893]: I0128 15:09:56.568795 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v4pdb" podStartSLOduration=2.174167219 podStartE2EDuration="4.56876637s" podCreationTimestamp="2026-01-28 15:09:52 +0000 UTC" firstStartedPulling="2026-01-28 15:09:53.486452721 +0000 UTC m=+511.260067749" lastFinishedPulling="2026-01-28 15:09:55.881051882 +0000 UTC m=+513.654666900" observedRunningTime="2026-01-28 15:09:56.544337016 +0000 UTC m=+514.317952064" watchObservedRunningTime="2026-01-28 15:09:56.56876637 +0000 UTC m=+514.342381398" Jan 28 15:09:56 crc kubenswrapper[4893]: I0128 15:09:56.569458 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jp7tn" podStartSLOduration=2.057743717 podStartE2EDuration="4.56945273s" podCreationTimestamp="2026-01-28 15:09:52 +0000 UTC" firstStartedPulling="2026-01-28 15:09:53.478727124 +0000 UTC m=+511.252342152" lastFinishedPulling="2026-01-28 15:09:55.990436137 +0000 UTC m=+513.764051165" observedRunningTime="2026-01-28 15:09:56.565034946 +0000 UTC m=+514.338649974" watchObservedRunningTime="2026-01-28 15:09:56.56945273 +0000 UTC m=+514.343067748" Jan 28 15:10:00 crc kubenswrapper[4893]: I0128 15:10:00.230530 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xlssl" Jan 28 15:10:00 crc kubenswrapper[4893]: I0128 15:10:00.231026 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xlssl" Jan 28 15:10:00 crc kubenswrapper[4893]: I0128 15:10:00.276276 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-xlssl" Jan 28 15:10:00 crc kubenswrapper[4893]: I0128 15:10:00.374128 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b7fw8" Jan 28 15:10:00 crc kubenswrapper[4893]: I0128 15:10:00.374527 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b7fw8" Jan 28 15:10:00 crc kubenswrapper[4893]: I0128 15:10:00.415179 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b7fw8" Jan 28 15:10:00 crc kubenswrapper[4893]: I0128 15:10:00.584428 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xlssl" Jan 28 15:10:00 crc kubenswrapper[4893]: I0128 15:10:00.584537 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b7fw8" Jan 28 15:10:02 crc kubenswrapper[4893]: I0128 15:10:02.581603 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v4pdb" Jan 28 15:10:02 crc kubenswrapper[4893]: I0128 15:10:02.581696 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v4pdb" Jan 28 15:10:02 crc kubenswrapper[4893]: I0128 15:10:02.620105 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v4pdb" Jan 28 15:10:02 crc kubenswrapper[4893]: I0128 15:10:02.776893 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jp7tn" Jan 28 15:10:02 crc kubenswrapper[4893]: I0128 15:10:02.777259 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jp7tn" Jan 28 15:10:02 crc kubenswrapper[4893]: I0128 15:10:02.819025 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jp7tn" Jan 28 15:10:03 crc kubenswrapper[4893]: I0128 15:10:03.596123 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jp7tn" Jan 28 15:10:03 crc kubenswrapper[4893]: I0128 15:10:03.596735 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v4pdb" Jan 28 15:11:05 crc kubenswrapper[4893]: I0128 15:11:05.722926 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:11:05 crc kubenswrapper[4893]: I0128 15:11:05.723416 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:11:35 crc kubenswrapper[4893]: I0128 15:11:35.722849 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:11:35 crc kubenswrapper[4893]: I0128 15:11:35.723442 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:12:05 crc kubenswrapper[4893]: I0128 15:12:05.722938 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:12:05 crc kubenswrapper[4893]: I0128 15:12:05.724065 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:12:05 crc kubenswrapper[4893]: I0128 15:12:05.724144 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" Jan 28 15:12:05 crc kubenswrapper[4893]: I0128 15:12:05.725092 4893 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"194444ae046498e08f8f911d426d18ad3d7857b481964cff9f834815e3198cff"} pod="openshift-machine-config-operator/machine-config-daemon-l2nht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:12:05 crc kubenswrapper[4893]: I0128 15:12:05.725170 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" containerID="cri-o://194444ae046498e08f8f911d426d18ad3d7857b481964cff9f834815e3198cff" gracePeriod=600 Jan 28 15:12:06 crc kubenswrapper[4893]: I0128 15:12:06.294186 4893 generic.go:334] "Generic (PLEG): container finished" podID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerID="194444ae046498e08f8f911d426d18ad3d7857b481964cff9f834815e3198cff" exitCode=0 Jan 28 15:12:06 crc kubenswrapper[4893]: I0128 15:12:06.294264 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" event={"ID":"b2ddd967-f9a8-464a-95de-512c9c5874fd","Type":"ContainerDied","Data":"194444ae046498e08f8f911d426d18ad3d7857b481964cff9f834815e3198cff"} Jan 28 15:12:06 crc kubenswrapper[4893]: I0128 15:12:06.294717 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" event={"ID":"b2ddd967-f9a8-464a-95de-512c9c5874fd","Type":"ContainerStarted","Data":"3ced3cae3c54b613ab0ce9fe21bab2e1babeff0d0bb895261d140f95238422f3"} Jan 28 15:12:06 crc kubenswrapper[4893]: I0128 15:12:06.294749 4893 scope.go:117] "RemoveContainer" containerID="20d8eb6fb2ed649557150caacec59c356900810bc0df5c731a7427a65b6878f0" Jan 28 15:12:50 crc kubenswrapper[4893]: I0128 15:12:50.622564 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-chgnp"] Jan 28 15:12:50 crc 
kubenswrapper[4893]: I0128 15:12:50.624030 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" Jan 28 15:12:50 crc kubenswrapper[4893]: I0128 15:12:50.647398 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-chgnp"] Jan 28 15:12:50 crc kubenswrapper[4893]: I0128 15:12:50.790859 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8974058f-50c2-4724-89fa-05b29d69ea8c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-chgnp\" (UID: \"8974058f-50c2-4724-89fa-05b29d69ea8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" Jan 28 15:12:50 crc kubenswrapper[4893]: I0128 15:12:50.790904 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8974058f-50c2-4724-89fa-05b29d69ea8c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-chgnp\" (UID: \"8974058f-50c2-4724-89fa-05b29d69ea8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" Jan 28 15:12:50 crc kubenswrapper[4893]: I0128 15:12:50.790929 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8974058f-50c2-4724-89fa-05b29d69ea8c-trusted-ca\") pod \"image-registry-66df7c8f76-chgnp\" (UID: \"8974058f-50c2-4724-89fa-05b29d69ea8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" Jan 28 15:12:50 crc kubenswrapper[4893]: I0128 15:12:50.790957 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8974058f-50c2-4724-89fa-05b29d69ea8c-bound-sa-token\") pod \"image-registry-66df7c8f76-chgnp\" (UID: \"8974058f-50c2-4724-89fa-05b29d69ea8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" Jan 28 15:12:50 crc kubenswrapper[4893]: I0128 15:12:50.791141 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w58cr\" (UniqueName: \"kubernetes.io/projected/8974058f-50c2-4724-89fa-05b29d69ea8c-kube-api-access-w58cr\") pod \"image-registry-66df7c8f76-chgnp\" (UID: \"8974058f-50c2-4724-89fa-05b29d69ea8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" Jan 28 15:12:50 crc kubenswrapper[4893]: I0128 15:12:50.791201 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8974058f-50c2-4724-89fa-05b29d69ea8c-registry-tls\") pod \"image-registry-66df7c8f76-chgnp\" (UID: \"8974058f-50c2-4724-89fa-05b29d69ea8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" Jan 28 15:12:50 crc kubenswrapper[4893]: I0128 15:12:50.791310 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8974058f-50c2-4724-89fa-05b29d69ea8c-registry-certificates\") pod \"image-registry-66df7c8f76-chgnp\" (UID: \"8974058f-50c2-4724-89fa-05b29d69ea8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" Jan 28 15:12:50 crc kubenswrapper[4893]: I0128 15:12:50.791414 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-chgnp\" (UID: \"8974058f-50c2-4724-89fa-05b29d69ea8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" Jan 28 15:12:50 crc kubenswrapper[4893]: I0128 15:12:50.811557 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-chgnp\" (UID: \"8974058f-50c2-4724-89fa-05b29d69ea8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" Jan 28 15:12:50 crc kubenswrapper[4893]: I0128 15:12:50.892158 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8974058f-50c2-4724-89fa-05b29d69ea8c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-chgnp\" (UID: \"8974058f-50c2-4724-89fa-05b29d69ea8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" Jan 28 15:12:50 crc kubenswrapper[4893]: I0128 15:12:50.892813 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8974058f-50c2-4724-89fa-05b29d69ea8c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-chgnp\" (UID: \"8974058f-50c2-4724-89fa-05b29d69ea8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" Jan 28 15:12:50 crc kubenswrapper[4893]: I0128 15:12:50.892876 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8974058f-50c2-4724-89fa-05b29d69ea8c-trusted-ca\") pod \"image-registry-66df7c8f76-chgnp\" (UID: \"8974058f-50c2-4724-89fa-05b29d69ea8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" Jan 28 15:12:50 crc kubenswrapper[4893]: I0128 15:12:50.892948 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8974058f-50c2-4724-89fa-05b29d69ea8c-bound-sa-token\") pod \"image-registry-66df7c8f76-chgnp\" (UID: \"8974058f-50c2-4724-89fa-05b29d69ea8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" Jan 28 15:12:50 crc kubenswrapper[4893]: I0128 15:12:50.893055 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w58cr\" (UniqueName: \"kubernetes.io/projected/8974058f-50c2-4724-89fa-05b29d69ea8c-kube-api-access-w58cr\") pod \"image-registry-66df7c8f76-chgnp\" (UID: \"8974058f-50c2-4724-89fa-05b29d69ea8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" Jan 28 15:12:50 crc kubenswrapper[4893]: I0128 15:12:50.893108 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8974058f-50c2-4724-89fa-05b29d69ea8c-registry-tls\") pod \"image-registry-66df7c8f76-chgnp\" (UID: \"8974058f-50c2-4724-89fa-05b29d69ea8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" Jan 28 15:12:50 crc kubenswrapper[4893]: I0128 15:12:50.893181 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8974058f-50c2-4724-89fa-05b29d69ea8c-registry-certificates\") pod \"image-registry-66df7c8f76-chgnp\" 
(UID: \"8974058f-50c2-4724-89fa-05b29d69ea8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" Jan 28 15:12:50 crc kubenswrapper[4893]: I0128 15:12:50.893359 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8974058f-50c2-4724-89fa-05b29d69ea8c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-chgnp\" (UID: \"8974058f-50c2-4724-89fa-05b29d69ea8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" Jan 28 15:12:50 crc kubenswrapper[4893]: I0128 15:12:50.894097 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8974058f-50c2-4724-89fa-05b29d69ea8c-trusted-ca\") pod \"image-registry-66df7c8f76-chgnp\" (UID: \"8974058f-50c2-4724-89fa-05b29d69ea8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" Jan 28 15:12:50 crc kubenswrapper[4893]: I0128 15:12:50.894790 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8974058f-50c2-4724-89fa-05b29d69ea8c-registry-certificates\") pod \"image-registry-66df7c8f76-chgnp\" (UID: \"8974058f-50c2-4724-89fa-05b29d69ea8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" Jan 28 15:12:50 crc kubenswrapper[4893]: I0128 15:12:50.909876 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8974058f-50c2-4724-89fa-05b29d69ea8c-registry-tls\") pod \"image-registry-66df7c8f76-chgnp\" (UID: \"8974058f-50c2-4724-89fa-05b29d69ea8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" Jan 28 15:12:50 crc kubenswrapper[4893]: I0128 15:12:50.909878 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8974058f-50c2-4724-89fa-05b29d69ea8c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-chgnp\" (UID: \"8974058f-50c2-4724-89fa-05b29d69ea8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" Jan 28 15:12:50 crc kubenswrapper[4893]: I0128 15:12:50.921760 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8974058f-50c2-4724-89fa-05b29d69ea8c-bound-sa-token\") pod \"image-registry-66df7c8f76-chgnp\" (UID: \"8974058f-50c2-4724-89fa-05b29d69ea8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" Jan 28 15:12:50 crc kubenswrapper[4893]: I0128 15:12:50.931425 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w58cr\" (UniqueName: \"kubernetes.io/projected/8974058f-50c2-4724-89fa-05b29d69ea8c-kube-api-access-w58cr\") pod \"image-registry-66df7c8f76-chgnp\" (UID: \"8974058f-50c2-4724-89fa-05b29d69ea8c\") " pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" Jan 28 15:12:50 crc kubenswrapper[4893]: I0128 15:12:50.954176 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" Jan 28 15:12:51 crc kubenswrapper[4893]: I0128 15:12:51.132023 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-chgnp"] Jan 28 15:12:51 crc kubenswrapper[4893]: I0128 15:12:51.672003 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" event={"ID":"8974058f-50c2-4724-89fa-05b29d69ea8c","Type":"ContainerStarted","Data":"56313215ece2055934b36c075d2b30da97924fe0aa9d33155d90bcfdecbe5924"} Jan 28 15:12:51 crc kubenswrapper[4893]: I0128 15:12:51.672492 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" Jan 28 15:12:51 crc kubenswrapper[4893]: I0128 15:12:51.672505 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" event={"ID":"8974058f-50c2-4724-89fa-05b29d69ea8c","Type":"ContainerStarted","Data":"35b983049d7ad8a20481f0592a53dc8b67df3348a529247b2836bbed8bdf07f2"} Jan 28 15:12:51 crc kubenswrapper[4893]: I0128 15:12:51.693243 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" podStartSLOduration=1.693224691 podStartE2EDuration="1.693224691s" podCreationTimestamp="2026-01-28 15:12:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:12:51.689369053 +0000 UTC m=+689.462984081" watchObservedRunningTime="2026-01-28 15:12:51.693224691 +0000 UTC m=+689.466839719" Jan 28 15:13:10 crc kubenswrapper[4893]: I0128 15:13:10.959158 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-chgnp" Jan 28 15:13:11 crc kubenswrapper[4893]: I0128 15:13:11.023016 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-g2dcn"] Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.057252 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" podUID="e54303a1-baec-46eb-92e9-9beeca76bb98" containerName="registry" containerID="cri-o://3f9ec4a163c1a671ea2ebe5d58822a1bde9d10f2810236b5a3046528d5aac46b" gracePeriod=30 Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.397308 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.444780 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e54303a1-baec-46eb-92e9-9beeca76bb98-registry-certificates\") pod \"e54303a1-baec-46eb-92e9-9beeca76bb98\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.444840 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e54303a1-baec-46eb-92e9-9beeca76bb98-ca-trust-extracted\") pod \"e54303a1-baec-46eb-92e9-9beeca76bb98\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.444877 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e54303a1-baec-46eb-92e9-9beeca76bb98-trusted-ca\") pod \"e54303a1-baec-46eb-92e9-9beeca76bb98\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.444895 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e54303a1-baec-46eb-92e9-9beeca76bb98-registry-tls\") pod \"e54303a1-baec-46eb-92e9-9beeca76bb98\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.444927 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e54303a1-baec-46eb-92e9-9beeca76bb98-bound-sa-token\") pod \"e54303a1-baec-46eb-92e9-9beeca76bb98\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.444950 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e54303a1-baec-46eb-92e9-9beeca76bb98-installation-pull-secrets\") pod \"e54303a1-baec-46eb-92e9-9beeca76bb98\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.444969 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb9pb\" (UniqueName: \"kubernetes.io/projected/e54303a1-baec-46eb-92e9-9beeca76bb98-kube-api-access-sb9pb\") pod \"e54303a1-baec-46eb-92e9-9beeca76bb98\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.445134 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"e54303a1-baec-46eb-92e9-9beeca76bb98\" (UID: \"e54303a1-baec-46eb-92e9-9beeca76bb98\") " Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.445583 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e54303a1-baec-46eb-92e9-9beeca76bb98-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "e54303a1-baec-46eb-92e9-9beeca76bb98" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.446660 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e54303a1-baec-46eb-92e9-9beeca76bb98-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "e54303a1-baec-46eb-92e9-9beeca76bb98" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.451840 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e54303a1-baec-46eb-92e9-9beeca76bb98-kube-api-access-sb9pb" (OuterVolumeSpecName: "kube-api-access-sb9pb") pod "e54303a1-baec-46eb-92e9-9beeca76bb98" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98"). InnerVolumeSpecName "kube-api-access-sb9pb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.452579 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e54303a1-baec-46eb-92e9-9beeca76bb98-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "e54303a1-baec-46eb-92e9-9beeca76bb98" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.452804 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e54303a1-baec-46eb-92e9-9beeca76bb98-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "e54303a1-baec-46eb-92e9-9beeca76bb98" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.456637 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e54303a1-baec-46eb-92e9-9beeca76bb98-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "e54303a1-baec-46eb-92e9-9beeca76bb98" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.461850 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "e54303a1-baec-46eb-92e9-9beeca76bb98" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.462440 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e54303a1-baec-46eb-92e9-9beeca76bb98-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "e54303a1-baec-46eb-92e9-9beeca76bb98" (UID: "e54303a1-baec-46eb-92e9-9beeca76bb98"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.545991 4893 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e54303a1-baec-46eb-92e9-9beeca76bb98-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.546045 4893 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e54303a1-baec-46eb-92e9-9beeca76bb98-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.546058 4893 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e54303a1-baec-46eb-92e9-9beeca76bb98-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.546068 4893 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e54303a1-baec-46eb-92e9-9beeca76bb98-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.546079 4893 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e54303a1-baec-46eb-92e9-9beeca76bb98-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.546093 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb9pb\" (UniqueName: \"kubernetes.io/projected/e54303a1-baec-46eb-92e9-9beeca76bb98-kube-api-access-sb9pb\") on node \"crc\" DevicePath \"\"" Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.546104 4893 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e54303a1-baec-46eb-92e9-9beeca76bb98-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.918994 4893 generic.go:334] "Generic (PLEG): container finished" podID="e54303a1-baec-46eb-92e9-9beeca76bb98" containerID="3f9ec4a163c1a671ea2ebe5d58822a1bde9d10f2810236b5a3046528d5aac46b" exitCode=0 Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.919047 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" event={"ID":"e54303a1-baec-46eb-92e9-9beeca76bb98","Type":"ContainerDied","Data":"3f9ec4a163c1a671ea2ebe5d58822a1bde9d10f2810236b5a3046528d5aac46b"} Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.919084 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" event={"ID":"e54303a1-baec-46eb-92e9-9beeca76bb98","Type":"ContainerDied","Data":"63301dc75a213ff6c57a16ef100e7fd6aee789410a7d7a0d560a9b49f5e6b372"} Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.919103 4893 scope.go:117] "RemoveContainer" containerID="3f9ec4a163c1a671ea2ebe5d58822a1bde9d10f2810236b5a3046528d5aac46b" Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.919272 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-g2dcn" Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.934202 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-g2dcn"] Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.939131 4893 scope.go:117] "RemoveContainer" containerID="3f9ec4a163c1a671ea2ebe5d58822a1bde9d10f2810236b5a3046528d5aac46b" Jan 28 15:13:36 crc kubenswrapper[4893]: E0128 15:13:36.939731 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f9ec4a163c1a671ea2ebe5d58822a1bde9d10f2810236b5a3046528d5aac46b\": container with ID starting with 3f9ec4a163c1a671ea2ebe5d58822a1bde9d10f2810236b5a3046528d5aac46b not found: ID does not exist" containerID="3f9ec4a163c1a671ea2ebe5d58822a1bde9d10f2810236b5a3046528d5aac46b" Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.939855 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f9ec4a163c1a671ea2ebe5d58822a1bde9d10f2810236b5a3046528d5aac46b"} err="failed to get container status \"3f9ec4a163c1a671ea2ebe5d58822a1bde9d10f2810236b5a3046528d5aac46b\": rpc error: code = NotFound desc = could not find container \"3f9ec4a163c1a671ea2ebe5d58822a1bde9d10f2810236b5a3046528d5aac46b\": container with ID starting with 3f9ec4a163c1a671ea2ebe5d58822a1bde9d10f2810236b5a3046528d5aac46b not found: ID does not exist" Jan 28 15:13:36 crc kubenswrapper[4893]: I0128 15:13:36.940925 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-g2dcn"] Jan 28 15:13:38 crc kubenswrapper[4893]: I0128 15:13:38.898415 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e54303a1-baec-46eb-92e9-9beeca76bb98" path="/var/lib/kubelet/pods/e54303a1-baec-46eb-92e9-9beeca76bb98/volumes" Jan 28 15:13:54 crc kubenswrapper[4893]: I0128 15:13:54.140480 4893 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 28 15:14:35 crc kubenswrapper[4893]: I0128 15:14:35.722179 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:14:35 crc kubenswrapper[4893]: I0128 15:14:35.722769 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:15:00 crc kubenswrapper[4893]: I0128 15:15:00.174424 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493555-6852c"] Jan 28 15:15:00 crc kubenswrapper[4893]: E0128 15:15:00.175923 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e54303a1-baec-46eb-92e9-9beeca76bb98" containerName="registry" Jan 28 15:15:00 crc kubenswrapper[4893]: I0128 15:15:00.175954 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="e54303a1-baec-46eb-92e9-9beeca76bb98" containerName="registry" Jan 28 15:15:00 crc kubenswrapper[4893]: I0128 15:15:00.176154 4893 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="e54303a1-baec-46eb-92e9-9beeca76bb98" containerName="registry" Jan 28 15:15:00 crc kubenswrapper[4893]: I0128 15:15:00.176938 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-6852c" Jan 28 15:15:00 crc kubenswrapper[4893]: I0128 15:15:00.179345 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 15:15:00 crc kubenswrapper[4893]: I0128 15:15:00.179910 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 15:15:00 crc kubenswrapper[4893]: I0128 15:15:00.186134 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493555-6852c"] Jan 28 15:15:00 crc kubenswrapper[4893]: I0128 15:15:00.271510 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3044b30f-5a83-40de-8232-3d9d39b315ca-config-volume\") pod \"collect-profiles-29493555-6852c\" (UID: \"3044b30f-5a83-40de-8232-3d9d39b315ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-6852c" Jan 28 15:15:00 crc kubenswrapper[4893]: I0128 15:15:00.272041 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26sqj\" (UniqueName: \"kubernetes.io/projected/3044b30f-5a83-40de-8232-3d9d39b315ca-kube-api-access-26sqj\") pod \"collect-profiles-29493555-6852c\" (UID: \"3044b30f-5a83-40de-8232-3d9d39b315ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-6852c" Jan 28 15:15:00 crc kubenswrapper[4893]: I0128 15:15:00.272072 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3044b30f-5a83-40de-8232-3d9d39b315ca-secret-volume\") pod \"collect-profiles-29493555-6852c\" (UID: \"3044b30f-5a83-40de-8232-3d9d39b315ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-6852c" Jan 28 15:15:00 crc kubenswrapper[4893]: I0128 15:15:00.373499 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26sqj\" (UniqueName: \"kubernetes.io/projected/3044b30f-5a83-40de-8232-3d9d39b315ca-kube-api-access-26sqj\") pod \"collect-profiles-29493555-6852c\" (UID: \"3044b30f-5a83-40de-8232-3d9d39b315ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-6852c" Jan 28 15:15:00 crc kubenswrapper[4893]: I0128 15:15:00.373562 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3044b30f-5a83-40de-8232-3d9d39b315ca-secret-volume\") pod \"collect-profiles-29493555-6852c\" (UID: \"3044b30f-5a83-40de-8232-3d9d39b315ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-6852c" Jan 28 15:15:00 crc kubenswrapper[4893]: I0128 15:15:00.373602 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3044b30f-5a83-40de-8232-3d9d39b315ca-config-volume\") pod \"collect-profiles-29493555-6852c\" (UID: \"3044b30f-5a83-40de-8232-3d9d39b315ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-6852c" Jan 28 15:15:00 crc 
kubenswrapper[4893]: I0128 15:15:00.374609 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3044b30f-5a83-40de-8232-3d9d39b315ca-config-volume\") pod \"collect-profiles-29493555-6852c\" (UID: \"3044b30f-5a83-40de-8232-3d9d39b315ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-6852c" Jan 28 15:15:00 crc kubenswrapper[4893]: I0128 15:15:00.382958 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3044b30f-5a83-40de-8232-3d9d39b315ca-secret-volume\") pod \"collect-profiles-29493555-6852c\" (UID: \"3044b30f-5a83-40de-8232-3d9d39b315ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-6852c" Jan 28 15:15:00 crc kubenswrapper[4893]: I0128 15:15:00.397759 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26sqj\" (UniqueName: \"kubernetes.io/projected/3044b30f-5a83-40de-8232-3d9d39b315ca-kube-api-access-26sqj\") pod \"collect-profiles-29493555-6852c\" (UID: \"3044b30f-5a83-40de-8232-3d9d39b315ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-6852c" Jan 28 15:15:00 crc kubenswrapper[4893]: I0128 15:15:00.504038 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-6852c" Jan 28 15:15:00 crc kubenswrapper[4893]: I0128 15:15:00.701153 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493555-6852c"] Jan 28 15:15:01 crc kubenswrapper[4893]: I0128 15:15:01.448223 4893 generic.go:334] "Generic (PLEG): container finished" podID="3044b30f-5a83-40de-8232-3d9d39b315ca" containerID="c58e93044f4c70d76e786e3be1d25a20d0e23541d0f59eafc8f1c23fa26003c1" exitCode=0 Jan 28 15:15:01 crc kubenswrapper[4893]: I0128 15:15:01.448300 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-6852c" event={"ID":"3044b30f-5a83-40de-8232-3d9d39b315ca","Type":"ContainerDied","Data":"c58e93044f4c70d76e786e3be1d25a20d0e23541d0f59eafc8f1c23fa26003c1"} Jan 28 15:15:01 crc kubenswrapper[4893]: I0128 15:15:01.448350 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-6852c" event={"ID":"3044b30f-5a83-40de-8232-3d9d39b315ca","Type":"ContainerStarted","Data":"7fd20837c7c88440d9ea3a7373b16a47b9fed58d3577f47af6c20416f9bafacb"} Jan 28 15:15:02 crc kubenswrapper[4893]: I0128 15:15:02.683455 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-6852c" Jan 28 15:15:02 crc kubenswrapper[4893]: I0128 15:15:02.805247 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3044b30f-5a83-40de-8232-3d9d39b315ca-secret-volume\") pod \"3044b30f-5a83-40de-8232-3d9d39b315ca\" (UID: \"3044b30f-5a83-40de-8232-3d9d39b315ca\") " Jan 28 15:15:02 crc kubenswrapper[4893]: I0128 15:15:02.805321 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26sqj\" (UniqueName: \"kubernetes.io/projected/3044b30f-5a83-40de-8232-3d9d39b315ca-kube-api-access-26sqj\") pod \"3044b30f-5a83-40de-8232-3d9d39b315ca\" (UID: \"3044b30f-5a83-40de-8232-3d9d39b315ca\") " Jan 28 15:15:02 crc kubenswrapper[4893]: I0128 15:15:02.805439 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3044b30f-5a83-40de-8232-3d9d39b315ca-config-volume\") pod \"3044b30f-5a83-40de-8232-3d9d39b315ca\" (UID: \"3044b30f-5a83-40de-8232-3d9d39b315ca\") " Jan 28 15:15:02 crc kubenswrapper[4893]: I0128 15:15:02.806267 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3044b30f-5a83-40de-8232-3d9d39b315ca-config-volume" (OuterVolumeSpecName: "config-volume") pod "3044b30f-5a83-40de-8232-3d9d39b315ca" (UID: "3044b30f-5a83-40de-8232-3d9d39b315ca"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:15:02 crc kubenswrapper[4893]: I0128 15:15:02.813681 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3044b30f-5a83-40de-8232-3d9d39b315ca-kube-api-access-26sqj" (OuterVolumeSpecName: "kube-api-access-26sqj") pod "3044b30f-5a83-40de-8232-3d9d39b315ca" (UID: "3044b30f-5a83-40de-8232-3d9d39b315ca"). InnerVolumeSpecName "kube-api-access-26sqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:15:02 crc kubenswrapper[4893]: I0128 15:15:02.817942 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3044b30f-5a83-40de-8232-3d9d39b315ca-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3044b30f-5a83-40de-8232-3d9d39b315ca" (UID: "3044b30f-5a83-40de-8232-3d9d39b315ca"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:15:02 crc kubenswrapper[4893]: I0128 15:15:02.906918 4893 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3044b30f-5a83-40de-8232-3d9d39b315ca-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:02 crc kubenswrapper[4893]: I0128 15:15:02.907275 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26sqj\" (UniqueName: \"kubernetes.io/projected/3044b30f-5a83-40de-8232-3d9d39b315ca-kube-api-access-26sqj\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:02 crc kubenswrapper[4893]: I0128 15:15:02.907288 4893 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3044b30f-5a83-40de-8232-3d9d39b315ca-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:03 crc kubenswrapper[4893]: I0128 15:15:03.468334 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-6852c" event={"ID":"3044b30f-5a83-40de-8232-3d9d39b315ca","Type":"ContainerDied","Data":"7fd20837c7c88440d9ea3a7373b16a47b9fed58d3577f47af6c20416f9bafacb"} Jan 28 15:15:03 crc kubenswrapper[4893]: I0128 15:15:03.468389 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fd20837c7c88440d9ea3a7373b16a47b9fed58d3577f47af6c20416f9bafacb" Jan 28 15:15:03 crc kubenswrapper[4893]: I0128 15:15:03.468591 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-6852c" Jan 28 15:15:05 crc kubenswrapper[4893]: I0128 15:15:05.723961 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:15:05 crc kubenswrapper[4893]: I0128 15:15:05.724025 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:15:20 crc kubenswrapper[4893]: I0128 15:15:20.532314 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn"] Jan 28 15:15:20 crc kubenswrapper[4893]: E0128 15:15:20.533066 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3044b30f-5a83-40de-8232-3d9d39b315ca" containerName="collect-profiles" Jan 28 15:15:20 crc kubenswrapper[4893]: I0128 15:15:20.533082 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="3044b30f-5a83-40de-8232-3d9d39b315ca" containerName="collect-profiles" Jan 28 15:15:20 crc kubenswrapper[4893]: I0128 15:15:20.533188 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="3044b30f-5a83-40de-8232-3d9d39b315ca" containerName="collect-profiles" Jan 28 15:15:20 crc kubenswrapper[4893]: I0128 15:15:20.534027 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn" Jan 28 15:15:20 crc kubenswrapper[4893]: I0128 15:15:20.536014 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 15:15:20 crc kubenswrapper[4893]: I0128 15:15:20.544964 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn"] Jan 28 15:15:20 crc kubenswrapper[4893]: I0128 15:15:20.565282 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da66fae5-bc9b-49b3-8ed8-729a9f353b67-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn\" (UID: \"da66fae5-bc9b-49b3-8ed8-729a9f353b67\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn" Jan 28 15:15:20 crc kubenswrapper[4893]: I0128 15:15:20.565370 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da66fae5-bc9b-49b3-8ed8-729a9f353b67-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn\" (UID: \"da66fae5-bc9b-49b3-8ed8-729a9f353b67\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn" Jan 28 15:15:20 crc kubenswrapper[4893]: I0128 15:15:20.565546 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpbl6\" (UniqueName: \"kubernetes.io/projected/da66fae5-bc9b-49b3-8ed8-729a9f353b67-kube-api-access-wpbl6\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn\" (UID: \"da66fae5-bc9b-49b3-8ed8-729a9f353b67\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn" Jan 28 15:15:20 crc kubenswrapper[4893]: I0128 15:15:20.665998 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da66fae5-bc9b-49b3-8ed8-729a9f353b67-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn\" (UID: \"da66fae5-bc9b-49b3-8ed8-729a9f353b67\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn" Jan 28 15:15:20 crc kubenswrapper[4893]: I0128 15:15:20.666067 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpbl6\" (UniqueName: \"kubernetes.io/projected/da66fae5-bc9b-49b3-8ed8-729a9f353b67-kube-api-access-wpbl6\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn\" (UID: \"da66fae5-bc9b-49b3-8ed8-729a9f353b67\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn" Jan 28 15:15:20 crc kubenswrapper[4893]: I0128 15:15:20.666200 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da66fae5-bc9b-49b3-8ed8-729a9f353b67-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn\" (UID: \"da66fae5-bc9b-49b3-8ed8-729a9f353b67\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn" Jan 28 15:15:20 crc kubenswrapper[4893]: I0128 15:15:20.666668 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/da66fae5-bc9b-49b3-8ed8-729a9f353b67-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn\" (UID: \"da66fae5-bc9b-49b3-8ed8-729a9f353b67\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn" Jan 28 15:15:20 crc kubenswrapper[4893]: I0128 15:15:20.666680 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da66fae5-bc9b-49b3-8ed8-729a9f353b67-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn\" (UID: \"da66fae5-bc9b-49b3-8ed8-729a9f353b67\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn" Jan 28 15:15:20 crc kubenswrapper[4893]: I0128 15:15:20.684651 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpbl6\" (UniqueName: \"kubernetes.io/projected/da66fae5-bc9b-49b3-8ed8-729a9f353b67-kube-api-access-wpbl6\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn\" (UID: \"da66fae5-bc9b-49b3-8ed8-729a9f353b67\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn" Jan 28 15:15:20 crc kubenswrapper[4893]: I0128 15:15:20.852408 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn" Jan 28 15:15:21 crc kubenswrapper[4893]: I0128 15:15:21.043633 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn"] Jan 28 15:15:21 crc kubenswrapper[4893]: W0128 15:15:21.052671 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda66fae5_bc9b_49b3_8ed8_729a9f353b67.slice/crio-ee09317cc695efd26aa05159fa193ca057d05a012e725464fdd7667c1b85bc0a WatchSource:0}: Error finding container ee09317cc695efd26aa05159fa193ca057d05a012e725464fdd7667c1b85bc0a: Status 404 returned error can't find the container with id ee09317cc695efd26aa05159fa193ca057d05a012e725464fdd7667c1b85bc0a Jan 28 15:15:21 crc kubenswrapper[4893]: I0128 15:15:21.582976 4893 generic.go:334] "Generic (PLEG): container finished" podID="da66fae5-bc9b-49b3-8ed8-729a9f353b67" containerID="01cadb840f9098df858d6231a35f47c9af5c3e75dd58f9aa13582ce36afc878f" exitCode=0 Jan 28 15:15:21 crc kubenswrapper[4893]: I0128 15:15:21.583037 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn" event={"ID":"da66fae5-bc9b-49b3-8ed8-729a9f353b67","Type":"ContainerDied","Data":"01cadb840f9098df858d6231a35f47c9af5c3e75dd58f9aa13582ce36afc878f"} Jan 28 15:15:21 crc kubenswrapper[4893]: I0128 15:15:21.583072 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn" event={"ID":"da66fae5-bc9b-49b3-8ed8-729a9f353b67","Type":"ContainerStarted","Data":"ee09317cc695efd26aa05159fa193ca057d05a012e725464fdd7667c1b85bc0a"} Jan 28 15:15:21 crc kubenswrapper[4893]: I0128 15:15:21.587013 4893 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:15:22 crc kubenswrapper[4893]: I0128 15:15:22.658835 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6qxps"] Jan 28 15:15:22 crc kubenswrapper[4893]: I0128 
15:15:22.660244 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6qxps" Jan 28 15:15:22 crc kubenswrapper[4893]: I0128 15:15:22.674937 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6qxps"] Jan 28 15:15:22 crc kubenswrapper[4893]: I0128 15:15:22.691154 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d191c46f-7ccf-4dc8-a2c8-477724435ff0-utilities\") pod \"redhat-operators-6qxps\" (UID: \"d191c46f-7ccf-4dc8-a2c8-477724435ff0\") " pod="openshift-marketplace/redhat-operators-6qxps" Jan 28 15:15:22 crc kubenswrapper[4893]: I0128 15:15:22.691246 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w86hr\" (UniqueName: \"kubernetes.io/projected/d191c46f-7ccf-4dc8-a2c8-477724435ff0-kube-api-access-w86hr\") pod \"redhat-operators-6qxps\" (UID: \"d191c46f-7ccf-4dc8-a2c8-477724435ff0\") " pod="openshift-marketplace/redhat-operators-6qxps" Jan 28 15:15:22 crc kubenswrapper[4893]: I0128 15:15:22.691273 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d191c46f-7ccf-4dc8-a2c8-477724435ff0-catalog-content\") pod \"redhat-operators-6qxps\" (UID: \"d191c46f-7ccf-4dc8-a2c8-477724435ff0\") " pod="openshift-marketplace/redhat-operators-6qxps" Jan 28 15:15:22 crc kubenswrapper[4893]: I0128 15:15:22.791908 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w86hr\" (UniqueName: \"kubernetes.io/projected/d191c46f-7ccf-4dc8-a2c8-477724435ff0-kube-api-access-w86hr\") pod \"redhat-operators-6qxps\" (UID: \"d191c46f-7ccf-4dc8-a2c8-477724435ff0\") " pod="openshift-marketplace/redhat-operators-6qxps" Jan 28 15:15:22 crc kubenswrapper[4893]: I0128 15:15:22.791967 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d191c46f-7ccf-4dc8-a2c8-477724435ff0-catalog-content\") pod \"redhat-operators-6qxps\" (UID: \"d191c46f-7ccf-4dc8-a2c8-477724435ff0\") " pod="openshift-marketplace/redhat-operators-6qxps" Jan 28 15:15:22 crc kubenswrapper[4893]: I0128 15:15:22.792033 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d191c46f-7ccf-4dc8-a2c8-477724435ff0-utilities\") pod \"redhat-operators-6qxps\" (UID: \"d191c46f-7ccf-4dc8-a2c8-477724435ff0\") " pod="openshift-marketplace/redhat-operators-6qxps" Jan 28 15:15:22 crc kubenswrapper[4893]: I0128 15:15:22.792595 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d191c46f-7ccf-4dc8-a2c8-477724435ff0-utilities\") pod \"redhat-operators-6qxps\" (UID: \"d191c46f-7ccf-4dc8-a2c8-477724435ff0\") " pod="openshift-marketplace/redhat-operators-6qxps" Jan 28 15:15:22 crc kubenswrapper[4893]: I0128 15:15:22.793068 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d191c46f-7ccf-4dc8-a2c8-477724435ff0-catalog-content\") pod \"redhat-operators-6qxps\" (UID: \"d191c46f-7ccf-4dc8-a2c8-477724435ff0\") " pod="openshift-marketplace/redhat-operators-6qxps" Jan 28 15:15:22 crc kubenswrapper[4893]: I0128 15:15:22.818124 4893 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w86hr\" (UniqueName: \"kubernetes.io/projected/d191c46f-7ccf-4dc8-a2c8-477724435ff0-kube-api-access-w86hr\") pod \"redhat-operators-6qxps\" (UID: \"d191c46f-7ccf-4dc8-a2c8-477724435ff0\") " pod="openshift-marketplace/redhat-operators-6qxps" Jan 28 15:15:22 crc kubenswrapper[4893]: I0128 15:15:22.980024 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6qxps" Jan 28 15:15:23 crc kubenswrapper[4893]: I0128 15:15:23.193334 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6qxps"] Jan 28 15:15:23 crc kubenswrapper[4893]: I0128 15:15:23.604959 4893 generic.go:334] "Generic (PLEG): container finished" podID="d191c46f-7ccf-4dc8-a2c8-477724435ff0" containerID="c31b780ed1f5bdea01d642cf815504d2b93b6af5be22ede0251dff03b6fb4b69" exitCode=0 Jan 28 15:15:23 crc kubenswrapper[4893]: I0128 15:15:23.605026 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qxps" event={"ID":"d191c46f-7ccf-4dc8-a2c8-477724435ff0","Type":"ContainerDied","Data":"c31b780ed1f5bdea01d642cf815504d2b93b6af5be22ede0251dff03b6fb4b69"} Jan 28 15:15:23 crc kubenswrapper[4893]: I0128 15:15:23.605057 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qxps" event={"ID":"d191c46f-7ccf-4dc8-a2c8-477724435ff0","Type":"ContainerStarted","Data":"ad435714b1828298e4422a5210bed16e7137c6b6bc5dace18416967c8871851f"} Jan 28 15:15:23 crc kubenswrapper[4893]: I0128 15:15:23.608800 4893 generic.go:334] "Generic (PLEG): container finished" podID="da66fae5-bc9b-49b3-8ed8-729a9f353b67" containerID="0afd82d81fc051d829f8c360b7a7d2f27eedbd9207cfcca9499bbe7ee28653e5" exitCode=0 Jan 28 15:15:23 crc kubenswrapper[4893]: I0128 15:15:23.608854 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn" event={"ID":"da66fae5-bc9b-49b3-8ed8-729a9f353b67","Type":"ContainerDied","Data":"0afd82d81fc051d829f8c360b7a7d2f27eedbd9207cfcca9499bbe7ee28653e5"} Jan 28 15:15:24 crc kubenswrapper[4893]: I0128 15:15:24.617882 4893 generic.go:334] "Generic (PLEG): container finished" podID="da66fae5-bc9b-49b3-8ed8-729a9f353b67" containerID="39fe177fbca60f04fe3e6db375ac4df3d3432cf235716c61a1468613f23de31a" exitCode=0 Jan 28 15:15:24 crc kubenswrapper[4893]: I0128 15:15:24.618004 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn" event={"ID":"da66fae5-bc9b-49b3-8ed8-729a9f353b67","Type":"ContainerDied","Data":"39fe177fbca60f04fe3e6db375ac4df3d3432cf235716c61a1468613f23de31a"} Jan 28 15:15:24 crc kubenswrapper[4893]: I0128 15:15:24.622957 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qxps" event={"ID":"d191c46f-7ccf-4dc8-a2c8-477724435ff0","Type":"ContainerStarted","Data":"a864d74b4ed038e2b08429de4141dcd9dbfb4fb13c6cb25a63d63d727fb5ea7d"} Jan 28 15:15:25 crc kubenswrapper[4893]: I0128 15:15:25.628905 4893 generic.go:334] "Generic (PLEG): container finished" podID="d191c46f-7ccf-4dc8-a2c8-477724435ff0" containerID="a864d74b4ed038e2b08429de4141dcd9dbfb4fb13c6cb25a63d63d727fb5ea7d" exitCode=0 Jan 28 15:15:25 crc kubenswrapper[4893]: I0128 15:15:25.628974 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-6qxps" event={"ID":"d191c46f-7ccf-4dc8-a2c8-477724435ff0","Type":"ContainerDied","Data":"a864d74b4ed038e2b08429de4141dcd9dbfb4fb13c6cb25a63d63d727fb5ea7d"} Jan 28 15:15:25 crc kubenswrapper[4893]: I0128 15:15:25.822953 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn" Jan 28 15:15:25 crc kubenswrapper[4893]: I0128 15:15:25.951859 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da66fae5-bc9b-49b3-8ed8-729a9f353b67-bundle\") pod \"da66fae5-bc9b-49b3-8ed8-729a9f353b67\" (UID: \"da66fae5-bc9b-49b3-8ed8-729a9f353b67\") " Jan 28 15:15:25 crc kubenswrapper[4893]: I0128 15:15:25.951940 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da66fae5-bc9b-49b3-8ed8-729a9f353b67-util\") pod \"da66fae5-bc9b-49b3-8ed8-729a9f353b67\" (UID: \"da66fae5-bc9b-49b3-8ed8-729a9f353b67\") " Jan 28 15:15:25 crc kubenswrapper[4893]: I0128 15:15:25.952021 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpbl6\" (UniqueName: \"kubernetes.io/projected/da66fae5-bc9b-49b3-8ed8-729a9f353b67-kube-api-access-wpbl6\") pod \"da66fae5-bc9b-49b3-8ed8-729a9f353b67\" (UID: \"da66fae5-bc9b-49b3-8ed8-729a9f353b67\") " Jan 28 15:15:25 crc kubenswrapper[4893]: I0128 15:15:25.952441 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da66fae5-bc9b-49b3-8ed8-729a9f353b67-bundle" (OuterVolumeSpecName: "bundle") pod "da66fae5-bc9b-49b3-8ed8-729a9f353b67" (UID: "da66fae5-bc9b-49b3-8ed8-729a9f353b67"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:15:25 crc kubenswrapper[4893]: I0128 15:15:25.961422 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da66fae5-bc9b-49b3-8ed8-729a9f353b67-kube-api-access-wpbl6" (OuterVolumeSpecName: "kube-api-access-wpbl6") pod "da66fae5-bc9b-49b3-8ed8-729a9f353b67" (UID: "da66fae5-bc9b-49b3-8ed8-729a9f353b67"). InnerVolumeSpecName "kube-api-access-wpbl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:15:25 crc kubenswrapper[4893]: I0128 15:15:25.972512 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da66fae5-bc9b-49b3-8ed8-729a9f353b67-util" (OuterVolumeSpecName: "util") pod "da66fae5-bc9b-49b3-8ed8-729a9f353b67" (UID: "da66fae5-bc9b-49b3-8ed8-729a9f353b67"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:15:26 crc kubenswrapper[4893]: I0128 15:15:26.053323 4893 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da66fae5-bc9b-49b3-8ed8-729a9f353b67-util\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:26 crc kubenswrapper[4893]: I0128 15:15:26.053370 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpbl6\" (UniqueName: \"kubernetes.io/projected/da66fae5-bc9b-49b3-8ed8-729a9f353b67-kube-api-access-wpbl6\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:26 crc kubenswrapper[4893]: I0128 15:15:26.053384 4893 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da66fae5-bc9b-49b3-8ed8-729a9f353b67-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:26 crc kubenswrapper[4893]: I0128 15:15:26.638159 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn" Jan 28 15:15:26 crc kubenswrapper[4893]: I0128 15:15:26.639245 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn" event={"ID":"da66fae5-bc9b-49b3-8ed8-729a9f353b67","Type":"ContainerDied","Data":"ee09317cc695efd26aa05159fa193ca057d05a012e725464fdd7667c1b85bc0a"} Jan 28 15:15:26 crc kubenswrapper[4893]: I0128 15:15:26.639352 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee09317cc695efd26aa05159fa193ca057d05a012e725464fdd7667c1b85bc0a" Jan 28 15:15:26 crc kubenswrapper[4893]: I0128 15:15:26.643157 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qxps" event={"ID":"d191c46f-7ccf-4dc8-a2c8-477724435ff0","Type":"ContainerStarted","Data":"51e7fac59ae1c6e7dc75bd3a7eb47f5a310ce445591ec5b5b4ccbcc9be3e02cb"} Jan 28 15:15:26 crc kubenswrapper[4893]: I0128 15:15:26.665043 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6qxps" podStartSLOduration=2.258738085 podStartE2EDuration="4.665025947s" podCreationTimestamp="2026-01-28 15:15:22 +0000 UTC" firstStartedPulling="2026-01-28 15:15:23.606496335 +0000 UTC m=+841.380111353" lastFinishedPulling="2026-01-28 15:15:26.012784187 +0000 UTC m=+843.786399215" observedRunningTime="2026-01-28 15:15:26.659687631 +0000 UTC m=+844.433302659" watchObservedRunningTime="2026-01-28 15:15:26.665025947 +0000 UTC m=+844.438640975" Jan 28 15:15:27 crc kubenswrapper[4893]: I0128 15:15:27.945272 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-82jmt"] Jan 28 15:15:27 crc kubenswrapper[4893]: E0128 15:15:27.945778 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da66fae5-bc9b-49b3-8ed8-729a9f353b67" containerName="extract" Jan 28 15:15:27 crc kubenswrapper[4893]: I0128 15:15:27.945790 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="da66fae5-bc9b-49b3-8ed8-729a9f353b67" containerName="extract" Jan 28 15:15:27 crc kubenswrapper[4893]: E0128 15:15:27.945807 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da66fae5-bc9b-49b3-8ed8-729a9f353b67" containerName="util" Jan 28 15:15:27 crc kubenswrapper[4893]: I0128 15:15:27.945813 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="da66fae5-bc9b-49b3-8ed8-729a9f353b67" containerName="util" Jan 28 15:15:27 crc 
kubenswrapper[4893]: E0128 15:15:27.945842 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da66fae5-bc9b-49b3-8ed8-729a9f353b67" containerName="pull" Jan 28 15:15:27 crc kubenswrapper[4893]: I0128 15:15:27.945849 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="da66fae5-bc9b-49b3-8ed8-729a9f353b67" containerName="pull" Jan 28 15:15:27 crc kubenswrapper[4893]: I0128 15:15:27.945933 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="da66fae5-bc9b-49b3-8ed8-729a9f353b67" containerName="extract" Jan 28 15:15:27 crc kubenswrapper[4893]: I0128 15:15:27.946305 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-82jmt" Jan 28 15:15:27 crc kubenswrapper[4893]: I0128 15:15:27.949101 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-2gqg5" Jan 28 15:15:27 crc kubenswrapper[4893]: I0128 15:15:27.949396 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 28 15:15:27 crc kubenswrapper[4893]: I0128 15:15:27.949603 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 28 15:15:27 crc kubenswrapper[4893]: I0128 15:15:27.954773 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-82jmt"] Jan 28 15:15:28 crc kubenswrapper[4893]: I0128 15:15:28.078978 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnjjq\" (UniqueName: \"kubernetes.io/projected/d87ded33-b86a-4245-b564-87d682532ec8-kube-api-access-jnjjq\") pod \"nmstate-operator-646758c888-82jmt\" (UID: \"d87ded33-b86a-4245-b564-87d682532ec8\") " pod="openshift-nmstate/nmstate-operator-646758c888-82jmt" Jan 28 15:15:28 crc kubenswrapper[4893]: I0128 15:15:28.180405 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnjjq\" (UniqueName: \"kubernetes.io/projected/d87ded33-b86a-4245-b564-87d682532ec8-kube-api-access-jnjjq\") pod \"nmstate-operator-646758c888-82jmt\" (UID: \"d87ded33-b86a-4245-b564-87d682532ec8\") " pod="openshift-nmstate/nmstate-operator-646758c888-82jmt" Jan 28 15:15:28 crc kubenswrapper[4893]: I0128 15:15:28.211639 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnjjq\" (UniqueName: \"kubernetes.io/projected/d87ded33-b86a-4245-b564-87d682532ec8-kube-api-access-jnjjq\") pod \"nmstate-operator-646758c888-82jmt\" (UID: \"d87ded33-b86a-4245-b564-87d682532ec8\") " pod="openshift-nmstate/nmstate-operator-646758c888-82jmt" Jan 28 15:15:28 crc kubenswrapper[4893]: I0128 15:15:28.262794 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-82jmt" Jan 28 15:15:28 crc kubenswrapper[4893]: I0128 15:15:28.462018 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-82jmt"] Jan 28 15:15:28 crc kubenswrapper[4893]: W0128 15:15:28.468257 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd87ded33_b86a_4245_b564_87d682532ec8.slice/crio-90d92c69081092ac7685ecc46fbcad268a12c5d5699bd940f8a21dfab2395786 WatchSource:0}: Error finding container 90d92c69081092ac7685ecc46fbcad268a12c5d5699bd940f8a21dfab2395786: Status 404 returned error can't find the container with id 90d92c69081092ac7685ecc46fbcad268a12c5d5699bd940f8a21dfab2395786 Jan 28 15:15:28 crc kubenswrapper[4893]: I0128 15:15:28.654445 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-82jmt" event={"ID":"d87ded33-b86a-4245-b564-87d682532ec8","Type":"ContainerStarted","Data":"90d92c69081092ac7685ecc46fbcad268a12c5d5699bd940f8a21dfab2395786"} Jan 28 15:15:30 crc kubenswrapper[4893]: I0128 15:15:30.356376 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5q54w"] Jan 28 15:15:30 crc kubenswrapper[4893]: I0128 15:15:30.357099 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="ovn-controller" containerID="cri-o://02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b" gracePeriod=30 Jan 28 15:15:30 crc kubenswrapper[4893]: I0128 15:15:30.357539 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f" gracePeriod=30 Jan 28 15:15:30 crc kubenswrapper[4893]: I0128 15:15:30.357571 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="ovn-acl-logging" containerID="cri-o://86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27" gracePeriod=30 Jan 28 15:15:30 crc kubenswrapper[4893]: I0128 15:15:30.357629 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="northd" containerID="cri-o://06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c" gracePeriod=30 Jan 28 15:15:30 crc kubenswrapper[4893]: I0128 15:15:30.357611 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="nbdb" containerID="cri-o://b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0" gracePeriod=30 Jan 28 15:15:30 crc kubenswrapper[4893]: I0128 15:15:30.357602 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="sbdb" containerID="cri-o://3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af" gracePeriod=30 Jan 28 15:15:30 crc kubenswrapper[4893]: I0128 15:15:30.357780 4893 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="kube-rbac-proxy-node" containerID="cri-o://d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646" gracePeriod=30 Jan 28 15:15:30 crc kubenswrapper[4893]: I0128 15:15:30.402006 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="ovnkube-controller" containerID="cri-o://7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305" gracePeriod=30 Jan 28 15:15:30 crc kubenswrapper[4893]: I0128 15:15:30.668435 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5q54w_135b9f51-26ac-44c4-a817-cbfa4b36ae54/ovnkube-controller/3.log" Jan 28 15:15:30 crc kubenswrapper[4893]: I0128 15:15:30.671685 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5q54w_135b9f51-26ac-44c4-a817-cbfa4b36ae54/ovn-acl-logging/0.log" Jan 28 15:15:30 crc kubenswrapper[4893]: I0128 15:15:30.672730 4893 generic.go:334] "Generic (PLEG): container finished" podID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerID="86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27" exitCode=143 Jan 28 15:15:30 crc kubenswrapper[4893]: I0128 15:15:30.672805 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" event={"ID":"135b9f51-26ac-44c4-a817-cbfa4b36ae54","Type":"ContainerDied","Data":"86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.438734 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5q54w_135b9f51-26ac-44c4-a817-cbfa4b36ae54/ovnkube-controller/3.log" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.441298 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5q54w_135b9f51-26ac-44c4-a817-cbfa4b36ae54/ovn-acl-logging/0.log" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.441877 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5q54w_135b9f51-26ac-44c4-a817-cbfa4b36ae54/ovn-controller/0.log" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.444584 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.501579 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4c84b"] Jan 28 15:15:31 crc kubenswrapper[4893]: E0128 15:15:31.501808 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="sbdb" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.501826 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="sbdb" Jan 28 15:15:31 crc kubenswrapper[4893]: E0128 15:15:31.501839 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="ovnkube-controller" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.501845 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="ovnkube-controller" Jan 28 15:15:31 crc kubenswrapper[4893]: E0128 15:15:31.501854 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="kubecfg-setup" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.501860 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="kubecfg-setup" Jan 28 15:15:31 crc kubenswrapper[4893]: E0128 15:15:31.501866 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="ovnkube-controller" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.501872 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="ovnkube-controller" Jan 28 15:15:31 crc kubenswrapper[4893]: E0128 15:15:31.501884 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="ovnkube-controller" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.501892 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="ovnkube-controller" Jan 28 15:15:31 crc kubenswrapper[4893]: E0128 15:15:31.501899 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="northd" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.501904 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="northd" Jan 28 15:15:31 crc kubenswrapper[4893]: E0128 15:15:31.501911 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="ovn-controller" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.501917 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="ovn-controller" Jan 28 15:15:31 crc kubenswrapper[4893]: E0128 15:15:31.501925 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="nbdb" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.501930 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="nbdb" Jan 28 15:15:31 crc kubenswrapper[4893]: E0128 15:15:31.501938 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" 
containerName="ovnkube-controller" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.501943 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="ovnkube-controller" Jan 28 15:15:31 crc kubenswrapper[4893]: E0128 15:15:31.501950 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="ovn-acl-logging" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.501956 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="ovn-acl-logging" Jan 28 15:15:31 crc kubenswrapper[4893]: E0128 15:15:31.501964 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.501972 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 15:15:31 crc kubenswrapper[4893]: E0128 15:15:31.501983 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="kube-rbac-proxy-node" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.501991 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="kube-rbac-proxy-node" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.502091 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="ovnkube-controller" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.502099 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="sbdb" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.502108 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="nbdb" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.502116 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="ovn-controller" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.502125 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.502131 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="northd" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.502139 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="ovnkube-controller" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.502147 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="ovnkube-controller" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.502154 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="kube-rbac-proxy-node" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.502160 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="ovn-acl-logging" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.502167 4893 
memory_manager.go:354] "RemoveStaleState removing state" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="ovnkube-controller" Jan 28 15:15:31 crc kubenswrapper[4893]: E0128 15:15:31.502257 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="ovnkube-controller" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.502264 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="ovnkube-controller" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.502352 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerName="ovnkube-controller" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.504114 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.625921 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-kubelet\") pod \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.625987 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-log-socket\") pod \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.626003 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-run-ovn-kubernetes\") pod \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.626034 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/135b9f51-26ac-44c4-a817-cbfa4b36ae54-ovn-node-metrics-cert\") pod \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.626056 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-run-openvswitch\") pod \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.626111 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "135b9f51-26ac-44c4-a817-cbfa4b36ae54" (UID: "135b9f51-26ac-44c4-a817-cbfa4b36ae54"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.626118 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-log-socket" (OuterVolumeSpecName: "log-socket") pod "135b9f51-26ac-44c4-a817-cbfa4b36ae54" (UID: "135b9f51-26ac-44c4-a817-cbfa4b36ae54"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.626119 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "135b9f51-26ac-44c4-a817-cbfa4b36ae54" (UID: "135b9f51-26ac-44c4-a817-cbfa4b36ae54"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.626195 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "135b9f51-26ac-44c4-a817-cbfa4b36ae54" (UID: "135b9f51-26ac-44c4-a817-cbfa4b36ae54"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.626667 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-node-log\") pod \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.626710 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/135b9f51-26ac-44c4-a817-cbfa4b36ae54-ovnkube-config\") pod \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.626728 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-run-netns\") pod \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.626752 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/135b9f51-26ac-44c4-a817-cbfa4b36ae54-env-overrides\") pod \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.626774 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-var-lib-openvswitch\") pod \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.626831 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-run-ovn\") pod \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.626826 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "135b9f51-26ac-44c4-a817-cbfa4b36ae54" (UID: "135b9f51-26ac-44c4-a817-cbfa4b36ae54"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.626852 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-run-systemd\") pod \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.626858 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "135b9f51-26ac-44c4-a817-cbfa4b36ae54" (UID: "135b9f51-26ac-44c4-a817-cbfa4b36ae54"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.626881 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "135b9f51-26ac-44c4-a817-cbfa4b36ae54" (UID: "135b9f51-26ac-44c4-a817-cbfa4b36ae54"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.626919 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-var-lib-cni-networks-ovn-kubernetes\") pod \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.626939 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-slash\") pod \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.626999 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-slash" (OuterVolumeSpecName: "host-slash") pod "135b9f51-26ac-44c4-a817-cbfa4b36ae54" (UID: "135b9f51-26ac-44c4-a817-cbfa4b36ae54"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.626996 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "135b9f51-26ac-44c4-a817-cbfa4b36ae54" (UID: "135b9f51-26ac-44c4-a817-cbfa4b36ae54"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627038 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/135b9f51-26ac-44c4-a817-cbfa4b36ae54-ovnkube-script-lib\") pod \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627063 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-cni-bin\") pod \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627098 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-etc-openvswitch\") pod \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627123 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-cni-netd\") pod \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627158 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "135b9f51-26ac-44c4-a817-cbfa4b36ae54" (UID: "135b9f51-26ac-44c4-a817-cbfa4b36ae54"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627152 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "135b9f51-26ac-44c4-a817-cbfa4b36ae54" (UID: "135b9f51-26ac-44c4-a817-cbfa4b36ae54"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627197 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-systemd-units\") pod \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627204 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/135b9f51-26ac-44c4-a817-cbfa4b36ae54-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "135b9f51-26ac-44c4-a817-cbfa4b36ae54" (UID: "135b9f51-26ac-44c4-a817-cbfa4b36ae54"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627221 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwtf7\" (UniqueName: \"kubernetes.io/projected/135b9f51-26ac-44c4-a817-cbfa4b36ae54-kube-api-access-gwtf7\") pod \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\" (UID: \"135b9f51-26ac-44c4-a817-cbfa4b36ae54\") " Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627250 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "135b9f51-26ac-44c4-a817-cbfa4b36ae54" (UID: "135b9f51-26ac-44c4-a817-cbfa4b36ae54"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627254 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "135b9f51-26ac-44c4-a817-cbfa4b36ae54" (UID: "135b9f51-26ac-44c4-a817-cbfa4b36ae54"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627256 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/135b9f51-26ac-44c4-a817-cbfa4b36ae54-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "135b9f51-26ac-44c4-a817-cbfa4b36ae54" (UID: "135b9f51-26ac-44c4-a817-cbfa4b36ae54"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627422 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-var-lib-openvswitch\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627447 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6d321a2f-5173-43fe-877f-5659444981a3-ovn-node-metrics-cert\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627524 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/135b9f51-26ac-44c4-a817-cbfa4b36ae54-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "135b9f51-26ac-44c4-a817-cbfa4b36ae54" (UID: "135b9f51-26ac-44c4-a817-cbfa4b36ae54"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627529 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-host-run-netns\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627555 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-node-log" (OuterVolumeSpecName: "node-log") pod "135b9f51-26ac-44c4-a817-cbfa4b36ae54" (UID: "135b9f51-26ac-44c4-a817-cbfa4b36ae54"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627584 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-host-cni-bin\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627608 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6d321a2f-5173-43fe-877f-5659444981a3-ovnkube-script-lib\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627627 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-node-log\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627654 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-etc-openvswitch\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627683 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-host-cni-netd\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627729 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-host-slash\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627784 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-run-openvswitch\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627808 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627832 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-systemd-units\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627857 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmj5v\" (UniqueName: \"kubernetes.io/projected/6d321a2f-5173-43fe-877f-5659444981a3-kube-api-access-hmj5v\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627890 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-run-systemd\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627919 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-run-ovn\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.627998 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6d321a2f-5173-43fe-877f-5659444981a3-env-overrides\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.628052 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-host-run-ovn-kubernetes\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.628090 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6d321a2f-5173-43fe-877f-5659444981a3-ovnkube-config\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 
15:15:31.628177 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-host-kubelet\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.628224 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-log-socket\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.628328 4893 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.628344 4893 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.628359 4893 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-slash\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.628374 4893 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/135b9f51-26ac-44c4-a817-cbfa4b36ae54-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.628387 4893 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.628401 4893 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.628414 4893 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.628425 4893 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.628438 4893 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.628449 4893 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-log-socket\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.628460 4893 reconciler_common.go:293] "Volume 
detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.628469 4893 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.628637 4893 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-node-log\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.628645 4893 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/135b9f51-26ac-44c4-a817-cbfa4b36ae54-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.628653 4893 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.628661 4893 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/135b9f51-26ac-44c4-a817-cbfa4b36ae54-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.628669 4893 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.631282 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/135b9f51-26ac-44c4-a817-cbfa4b36ae54-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "135b9f51-26ac-44c4-a817-cbfa4b36ae54" (UID: "135b9f51-26ac-44c4-a817-cbfa4b36ae54"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.634360 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/135b9f51-26ac-44c4-a817-cbfa4b36ae54-kube-api-access-gwtf7" (OuterVolumeSpecName: "kube-api-access-gwtf7") pod "135b9f51-26ac-44c4-a817-cbfa4b36ae54" (UID: "135b9f51-26ac-44c4-a817-cbfa4b36ae54"). InnerVolumeSpecName "kube-api-access-gwtf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.641332 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "135b9f51-26ac-44c4-a817-cbfa4b36ae54" (UID: "135b9f51-26ac-44c4-a817-cbfa4b36ae54"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.679217 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-krkz9_a51e5a50-969c-4f25-a895-ebb119642512/kube-multus/2.log" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.679659 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-krkz9_a51e5a50-969c-4f25-a895-ebb119642512/kube-multus/1.log" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.679691 4893 generic.go:334] "Generic (PLEG): container finished" podID="a51e5a50-969c-4f25-a895-ebb119642512" containerID="70cbfe0325abc353a7d194d727a957eea71dda00452d5a048b5b50696e54c1e4" exitCode=2 Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.679734 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-krkz9" event={"ID":"a51e5a50-969c-4f25-a895-ebb119642512","Type":"ContainerDied","Data":"70cbfe0325abc353a7d194d727a957eea71dda00452d5a048b5b50696e54c1e4"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.679768 4893 scope.go:117] "RemoveContainer" containerID="0c04d370bb1ae00208a3c91c24a8d3bf60236155408ef3b6b3224e20ffdd5d0b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.680233 4893 scope.go:117] "RemoveContainer" containerID="70cbfe0325abc353a7d194d727a957eea71dda00452d5a048b5b50696e54c1e4" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.685240 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5q54w_135b9f51-26ac-44c4-a817-cbfa4b36ae54/ovnkube-controller/3.log" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.687963 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5q54w_135b9f51-26ac-44c4-a817-cbfa4b36ae54/ovn-acl-logging/0.log" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.688486 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5q54w_135b9f51-26ac-44c4-a817-cbfa4b36ae54/ovn-controller/0.log" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.688896 4893 generic.go:334] "Generic (PLEG): container finished" podID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerID="7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305" exitCode=0 Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.688931 4893 generic.go:334] "Generic (PLEG): container finished" podID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerID="3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af" exitCode=0 Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.688940 4893 generic.go:334] "Generic (PLEG): container finished" podID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerID="b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0" exitCode=0 Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.688969 4893 generic.go:334] "Generic (PLEG): container finished" podID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerID="06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c" exitCode=0 Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.688979 4893 generic.go:334] "Generic (PLEG): container finished" podID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerID="e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f" exitCode=0 Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.688989 4893 generic.go:334] "Generic (PLEG): container finished" podID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" 
containerID="d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646" exitCode=0 Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689000 4893 generic.go:334] "Generic (PLEG): container finished" podID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" containerID="02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b" exitCode=143 Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689008 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689044 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" event={"ID":"135b9f51-26ac-44c4-a817-cbfa4b36ae54","Type":"ContainerDied","Data":"7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689100 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" event={"ID":"135b9f51-26ac-44c4-a817-cbfa4b36ae54","Type":"ContainerDied","Data":"3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689119 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" event={"ID":"135b9f51-26ac-44c4-a817-cbfa4b36ae54","Type":"ContainerDied","Data":"b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689242 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" event={"ID":"135b9f51-26ac-44c4-a817-cbfa4b36ae54","Type":"ContainerDied","Data":"06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689386 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" event={"ID":"135b9f51-26ac-44c4-a817-cbfa4b36ae54","Type":"ContainerDied","Data":"e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689407 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" event={"ID":"135b9f51-26ac-44c4-a817-cbfa4b36ae54","Type":"ContainerDied","Data":"d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689421 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689437 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689443 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689450 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689458 4893 pod_container_deletor.go:114] "Failed to 
issue the request to remove container" containerID={"Type":"cri-o","ID":"06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689466 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689491 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689497 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689503 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689509 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689518 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" event={"ID":"135b9f51-26ac-44c4-a817-cbfa4b36ae54","Type":"ContainerDied","Data":"02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689529 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689540 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689547 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689553 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689560 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689566 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689572 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689578 4893 pod_container_deletor.go:114] "Failed to 
issue the request to remove container" containerID={"Type":"cri-o","ID":"86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689584 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689592 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689602 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5q54w" event={"ID":"135b9f51-26ac-44c4-a817-cbfa4b36ae54","Type":"ContainerDied","Data":"2620d2575a3a7001dc1d2d5fa4b7c024a4805b9ccb7bdca328b505b9cd1f7991"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689614 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689622 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689629 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689636 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689643 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689652 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689658 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689665 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689671 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.689680 4893 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.690628 4893 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-nmstate/nmstate-operator-646758c888-82jmt" event={"ID":"d87ded33-b86a-4245-b564-87d682532ec8","Type":"ContainerStarted","Data":"485c92ffb9bfba2b693ad43402f62055e410566a8245a6e716e2cc07d6b8d1b9"} Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.721434 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-82jmt" podStartSLOduration=1.689684365 podStartE2EDuration="4.721411132s" podCreationTimestamp="2026-01-28 15:15:27 +0000 UTC" firstStartedPulling="2026-01-28 15:15:28.470589775 +0000 UTC m=+846.244204803" lastFinishedPulling="2026-01-28 15:15:31.502316542 +0000 UTC m=+849.275931570" observedRunningTime="2026-01-28 15:15:31.721213556 +0000 UTC m=+849.494828604" watchObservedRunningTime="2026-01-28 15:15:31.721411132 +0000 UTC m=+849.495026160" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.726959 4893 scope.go:117] "RemoveContainer" containerID="7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.729635 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-host-slash\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.729709 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-run-openvswitch\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.729734 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-host-slash\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.729750 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.729797 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-systemd-units\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.729830 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-run-openvswitch\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.729831 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmj5v\" (UniqueName: 
\"kubernetes.io/projected/6d321a2f-5173-43fe-877f-5659444981a3-kube-api-access-hmj5v\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.729908 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-run-systemd\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.729933 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-systemd-units\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.729807 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.729959 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-run-ovn\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.729967 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-run-systemd\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.729995 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-run-ovn\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.730014 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6d321a2f-5173-43fe-877f-5659444981a3-env-overrides\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.730045 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-host-run-ovn-kubernetes\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.730066 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6d321a2f-5173-43fe-877f-5659444981a3-ovnkube-config\") pod 
\"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.730090 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-host-kubelet\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.730106 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-log-socket\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.730143 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-var-lib-openvswitch\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.730160 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6d321a2f-5173-43fe-877f-5659444981a3-ovn-node-metrics-cert\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.730177 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-host-run-netns\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.730194 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-node-log\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.730210 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-host-cni-bin\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.730227 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6d321a2f-5173-43fe-877f-5659444981a3-ovnkube-script-lib\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.730253 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-etc-openvswitch\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.730270 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-host-cni-netd\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.730316 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-host-cni-netd\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.730321 4893 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/135b9f51-26ac-44c4-a817-cbfa4b36ae54-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.730343 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwtf7\" (UniqueName: \"kubernetes.io/projected/135b9f51-26ac-44c4-a817-cbfa4b36ae54-kube-api-access-gwtf7\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.730358 4893 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/135b9f51-26ac-44c4-a817-cbfa4b36ae54-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.730342 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-host-run-ovn-kubernetes\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.730889 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6d321a2f-5173-43fe-877f-5659444981a3-ovnkube-config\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.730928 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-host-kubelet\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.730951 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-log-socket\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.731168 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-node-log\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 
15:15:31.731232 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-var-lib-openvswitch\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.731599 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-host-cni-bin\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.732093 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6d321a2f-5173-43fe-877f-5659444981a3-ovnkube-script-lib\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.732127 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-etc-openvswitch\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.732151 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6d321a2f-5173-43fe-877f-5659444981a3-host-run-netns\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.732776 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6d321a2f-5173-43fe-877f-5659444981a3-env-overrides\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.738543 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5q54w"] Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.744005 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6d321a2f-5173-43fe-877f-5659444981a3-ovn-node-metrics-cert\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.760991 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5q54w"] Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.761065 4893 scope.go:117] "RemoveContainer" containerID="2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.766469 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmj5v\" (UniqueName: \"kubernetes.io/projected/6d321a2f-5173-43fe-877f-5659444981a3-kube-api-access-hmj5v\") pod \"ovnkube-node-4c84b\" (UID: \"6d321a2f-5173-43fe-877f-5659444981a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc 
kubenswrapper[4893]: I0128 15:15:31.780326 4893 scope.go:117] "RemoveContainer" containerID="3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.804060 4893 scope.go:117] "RemoveContainer" containerID="b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.824380 4893 scope.go:117] "RemoveContainer" containerID="06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.825300 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.858103 4893 scope.go:117] "RemoveContainer" containerID="e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.873886 4893 scope.go:117] "RemoveContainer" containerID="d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.899749 4893 scope.go:117] "RemoveContainer" containerID="86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.923049 4893 scope.go:117] "RemoveContainer" containerID="02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.938151 4893 scope.go:117] "RemoveContainer" containerID="c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.957364 4893 scope.go:117] "RemoveContainer" containerID="7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305" Jan 28 15:15:31 crc kubenswrapper[4893]: E0128 15:15:31.957846 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305\": container with ID starting with 7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305 not found: ID does not exist" containerID="7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.957944 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305"} err="failed to get container status \"7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305\": rpc error: code = NotFound desc = could not find container \"7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305\": container with ID starting with 7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305 not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.958036 4893 scope.go:117] "RemoveContainer" containerID="2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0" Jan 28 15:15:31 crc kubenswrapper[4893]: E0128 15:15:31.958450 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0\": container with ID starting with 2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0 not found: ID does not exist" containerID="2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.958553 4893 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0"} err="failed to get container status \"2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0\": rpc error: code = NotFound desc = could not find container \"2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0\": container with ID starting with 2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0 not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.958626 4893 scope.go:117] "RemoveContainer" containerID="3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af" Jan 28 15:15:31 crc kubenswrapper[4893]: E0128 15:15:31.959021 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\": container with ID starting with 3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af not found: ID does not exist" containerID="3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.959066 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af"} err="failed to get container status \"3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\": rpc error: code = NotFound desc = could not find container \"3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\": container with ID starting with 3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.959104 4893 scope.go:117] "RemoveContainer" containerID="b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0" Jan 28 15:15:31 crc kubenswrapper[4893]: E0128 15:15:31.959425 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\": container with ID starting with b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0 not found: ID does not exist" containerID="b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.959447 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0"} err="failed to get container status \"b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\": rpc error: code = NotFound desc = could not find container \"b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\": container with ID starting with b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0 not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.959461 4893 scope.go:117] "RemoveContainer" containerID="06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c" Jan 28 15:15:31 crc kubenswrapper[4893]: E0128 15:15:31.959746 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\": container with ID starting with 06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c not found: ID does 
not exist" containerID="06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.959865 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c"} err="failed to get container status \"06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\": rpc error: code = NotFound desc = could not find container \"06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\": container with ID starting with 06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.959961 4893 scope.go:117] "RemoveContainer" containerID="e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f" Jan 28 15:15:31 crc kubenswrapper[4893]: E0128 15:15:31.960315 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\": container with ID starting with e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f not found: ID does not exist" containerID="e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.960402 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f"} err="failed to get container status \"e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\": rpc error: code = NotFound desc = could not find container \"e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\": container with ID starting with e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.960466 4893 scope.go:117] "RemoveContainer" containerID="d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646" Jan 28 15:15:31 crc kubenswrapper[4893]: E0128 15:15:31.960767 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\": container with ID starting with d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646 not found: ID does not exist" containerID="d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.960844 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646"} err="failed to get container status \"d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\": rpc error: code = NotFound desc = could not find container \"d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\": container with ID starting with d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646 not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.960911 4893 scope.go:117] "RemoveContainer" containerID="86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27" Jan 28 15:15:31 crc kubenswrapper[4893]: E0128 15:15:31.962931 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\": container with ID starting with 86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27 not found: ID does not exist" containerID="86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.963924 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27"} err="failed to get container status \"86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\": rpc error: code = NotFound desc = could not find container \"86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\": container with ID starting with 86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27 not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.964049 4893 scope.go:117] "RemoveContainer" containerID="02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b" Jan 28 15:15:31 crc kubenswrapper[4893]: E0128 15:15:31.964561 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\": container with ID starting with 02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b not found: ID does not exist" containerID="02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.964595 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b"} err="failed to get container status \"02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\": rpc error: code = NotFound desc = could not find container \"02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\": container with ID starting with 02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.964620 4893 scope.go:117] "RemoveContainer" containerID="c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654" Jan 28 15:15:31 crc kubenswrapper[4893]: E0128 15:15:31.965494 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\": container with ID starting with c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654 not found: ID does not exist" containerID="c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.965593 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654"} err="failed to get container status \"c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\": rpc error: code = NotFound desc = could not find container \"c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\": container with ID starting with c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654 not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.965659 4893 scope.go:117] "RemoveContainer" containerID="7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305" Jan 28 15:15:31 crc 
kubenswrapper[4893]: I0128 15:15:31.966122 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305"} err="failed to get container status \"7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305\": rpc error: code = NotFound desc = could not find container \"7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305\": container with ID starting with 7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305 not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.966214 4893 scope.go:117] "RemoveContainer" containerID="2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.966545 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0"} err="failed to get container status \"2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0\": rpc error: code = NotFound desc = could not find container \"2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0\": container with ID starting with 2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0 not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.966658 4893 scope.go:117] "RemoveContainer" containerID="3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.967103 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af"} err="failed to get container status \"3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\": rpc error: code = NotFound desc = could not find container \"3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\": container with ID starting with 3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.967214 4893 scope.go:117] "RemoveContainer" containerID="b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.967554 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0"} err="failed to get container status \"b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\": rpc error: code = NotFound desc = could not find container \"b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\": container with ID starting with b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0 not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.967579 4893 scope.go:117] "RemoveContainer" containerID="06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.967834 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c"} err="failed to get container status \"06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\": rpc error: code = NotFound desc = could not find container \"06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\": container with ID 
starting with 06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.967910 4893 scope.go:117] "RemoveContainer" containerID="e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.968344 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f"} err="failed to get container status \"e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\": rpc error: code = NotFound desc = could not find container \"e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\": container with ID starting with e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.968372 4893 scope.go:117] "RemoveContainer" containerID="d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.968864 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646"} err="failed to get container status \"d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\": rpc error: code = NotFound desc = could not find container \"d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\": container with ID starting with d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646 not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.968945 4893 scope.go:117] "RemoveContainer" containerID="86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.969281 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27"} err="failed to get container status \"86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\": rpc error: code = NotFound desc = could not find container \"86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\": container with ID starting with 86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27 not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.969359 4893 scope.go:117] "RemoveContainer" containerID="02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.969851 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b"} err="failed to get container status \"02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\": rpc error: code = NotFound desc = could not find container \"02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\": container with ID starting with 02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.969914 4893 scope.go:117] "RemoveContainer" containerID="c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.970234 4893 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654"} err="failed to get container status \"c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\": rpc error: code = NotFound desc = could not find container \"c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\": container with ID starting with c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654 not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.970260 4893 scope.go:117] "RemoveContainer" containerID="7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.970717 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305"} err="failed to get container status \"7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305\": rpc error: code = NotFound desc = could not find container \"7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305\": container with ID starting with 7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305 not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.970742 4893 scope.go:117] "RemoveContainer" containerID="2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.971019 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0"} err="failed to get container status \"2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0\": rpc error: code = NotFound desc = could not find container \"2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0\": container with ID starting with 2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0 not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.971126 4893 scope.go:117] "RemoveContainer" containerID="3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.972755 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af"} err="failed to get container status \"3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\": rpc error: code = NotFound desc = could not find container \"3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\": container with ID starting with 3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.972967 4893 scope.go:117] "RemoveContainer" containerID="b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.973369 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0"} err="failed to get container status \"b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\": rpc error: code = NotFound desc = could not find container \"b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\": container with ID starting with b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0 not found: ID does not exist" Jan 
28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.973420 4893 scope.go:117] "RemoveContainer" containerID="06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.973774 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c"} err="failed to get container status \"06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\": rpc error: code = NotFound desc = could not find container \"06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\": container with ID starting with 06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.973818 4893 scope.go:117] "RemoveContainer" containerID="e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.975256 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f"} err="failed to get container status \"e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\": rpc error: code = NotFound desc = could not find container \"e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\": container with ID starting with e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.975285 4893 scope.go:117] "RemoveContainer" containerID="d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.975613 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646"} err="failed to get container status \"d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\": rpc error: code = NotFound desc = could not find container \"d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\": container with ID starting with d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646 not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.975641 4893 scope.go:117] "RemoveContainer" containerID="86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.976247 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27"} err="failed to get container status \"86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\": rpc error: code = NotFound desc = could not find container \"86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\": container with ID starting with 86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27 not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.976305 4893 scope.go:117] "RemoveContainer" containerID="02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.976724 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b"} err="failed to get container status 
\"02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\": rpc error: code = NotFound desc = could not find container \"02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\": container with ID starting with 02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.976755 4893 scope.go:117] "RemoveContainer" containerID="c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.977139 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654"} err="failed to get container status \"c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\": rpc error: code = NotFound desc = could not find container \"c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\": container with ID starting with c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654 not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.977181 4893 scope.go:117] "RemoveContainer" containerID="7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.977508 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305"} err="failed to get container status \"7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305\": rpc error: code = NotFound desc = could not find container \"7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305\": container with ID starting with 7c178b68ceade4ded570d0e9100f5ded1d3289011a86171c03585221df13a305 not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.977552 4893 scope.go:117] "RemoveContainer" containerID="2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.977841 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0"} err="failed to get container status \"2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0\": rpc error: code = NotFound desc = could not find container \"2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0\": container with ID starting with 2aa04784d8862f3c66c65e4dbfbecbce2f777407943f867a58d79e2975bec7d0 not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.977870 4893 scope.go:117] "RemoveContainer" containerID="3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.978172 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af"} err="failed to get container status \"3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\": rpc error: code = NotFound desc = could not find container \"3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af\": container with ID starting with 3a795070115deec6ab8940d5e940ac9f2747e8db96735ab5ba3b600b453f28af not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.978192 4893 scope.go:117] "RemoveContainer" 
containerID="b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.978645 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0"} err="failed to get container status \"b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\": rpc error: code = NotFound desc = could not find container \"b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0\": container with ID starting with b52efdda96d2f609c5767a205d61c30edd0a4f48c847d3103952c432e7c186d0 not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.978670 4893 scope.go:117] "RemoveContainer" containerID="06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.979073 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c"} err="failed to get container status \"06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\": rpc error: code = NotFound desc = could not find container \"06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c\": container with ID starting with 06ba67ee1cc1cb287ed10ae5d27beb10bed6f9fffc1773627efabbd7a643f92c not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.979154 4893 scope.go:117] "RemoveContainer" containerID="e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.979817 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f"} err="failed to get container status \"e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\": rpc error: code = NotFound desc = could not find container \"e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f\": container with ID starting with e13f6ca19d010a934ae29c26b516ceab8662465b57883fae4f07757459510c9f not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.979849 4893 scope.go:117] "RemoveContainer" containerID="d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.980166 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646"} err="failed to get container status \"d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\": rpc error: code = NotFound desc = could not find container \"d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646\": container with ID starting with d896e689bd0c4f1b4ea1718af2a7cddc5cad194465ee1e953ec1a448047bb646 not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.980182 4893 scope.go:117] "RemoveContainer" containerID="86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.980997 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27"} err="failed to get container status \"86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\": rpc error: code = NotFound desc = could not find 
container \"86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27\": container with ID starting with 86d3ab3437d178f4f14acf261f82632ff8000026d38e3cfdc5a6a576aaf20a27 not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.981020 4893 scope.go:117] "RemoveContainer" containerID="02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.981372 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b"} err="failed to get container status \"02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\": rpc error: code = NotFound desc = could not find container \"02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b\": container with ID starting with 02c0641cc33efa77185a78bce06fb7d3098a3ce20743814f39064680d31d113b not found: ID does not exist" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.981415 4893 scope.go:117] "RemoveContainer" containerID="c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654" Jan 28 15:15:31 crc kubenswrapper[4893]: I0128 15:15:31.981791 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654"} err="failed to get container status \"c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\": rpc error: code = NotFound desc = could not find container \"c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654\": container with ID starting with c9274c62c624cf98e986b92fab0d7180cc877b669ed754d911cee7592666d654 not found: ID does not exist" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.661693 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ft8jl"] Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.663003 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-ft8jl" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.665271 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-g5sd4" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.674424 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-rprnk"] Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.675824 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rprnk" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.678050 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.699063 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-2dw5k"] Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.699928 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-2dw5k" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.701267 4893 generic.go:334] "Generic (PLEG): container finished" podID="6d321a2f-5173-43fe-877f-5659444981a3" containerID="5d94b557f4d464db9b48adc6b2dadbba05506dad87e2446396ac6c0333dc3dd7" exitCode=0 Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.701359 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" event={"ID":"6d321a2f-5173-43fe-877f-5659444981a3","Type":"ContainerDied","Data":"5d94b557f4d464db9b48adc6b2dadbba05506dad87e2446396ac6c0333dc3dd7"} Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.701449 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" event={"ID":"6d321a2f-5173-43fe-877f-5659444981a3","Type":"ContainerStarted","Data":"67b823a344b1602029047a747091ee3db85784f91559a8e6accdb1429e569f35"} Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.706864 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-krkz9_a51e5a50-969c-4f25-a895-ebb119642512/kube-multus/2.log" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.707425 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-krkz9" event={"ID":"a51e5a50-969c-4f25-a895-ebb119642512","Type":"ContainerStarted","Data":"329a013c985a588d7619901331d4c6b218957891411406d2d28da5db47a0c520"} Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.840386 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-shqgw"] Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.841368 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-shqgw" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.849530 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.849995 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-6l4r6" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.850068 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.853231 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmlmf\" (UniqueName: \"kubernetes.io/projected/caa645f7-f683-4bda-851a-91732a41d8fc-kube-api-access-rmlmf\") pod \"nmstate-metrics-54757c584b-ft8jl\" (UID: \"caa645f7-f683-4bda-851a-91732a41d8fc\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ft8jl" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.853349 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8dcdd494-a746-4e9f-89ad-da96e2b2ab17-nmstate-lock\") pod \"nmstate-handler-2dw5k\" (UID: \"8dcdd494-a746-4e9f-89ad-da96e2b2ab17\") " pod="openshift-nmstate/nmstate-handler-2dw5k" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.853379 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whrsx\" (UniqueName: \"kubernetes.io/projected/8a3c7538-e078-4e89-b34d-dd128942e19d-kube-api-access-whrsx\") pod \"nmstate-console-plugin-7754f76f8b-shqgw\" 
(UID: \"8a3c7538-e078-4e89-b34d-dd128942e19d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-shqgw" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.853412 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a3c7538-e078-4e89-b34d-dd128942e19d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-shqgw\" (UID: \"8a3c7538-e078-4e89-b34d-dd128942e19d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-shqgw" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.853436 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8dcdd494-a746-4e9f-89ad-da96e2b2ab17-dbus-socket\") pod \"nmstate-handler-2dw5k\" (UID: \"8dcdd494-a746-4e9f-89ad-da96e2b2ab17\") " pod="openshift-nmstate/nmstate-handler-2dw5k" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.854684 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8a3c7538-e078-4e89-b34d-dd128942e19d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-shqgw\" (UID: \"8a3c7538-e078-4e89-b34d-dd128942e19d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-shqgw" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.854742 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8dcdd494-a746-4e9f-89ad-da96e2b2ab17-ovs-socket\") pod \"nmstate-handler-2dw5k\" (UID: \"8dcdd494-a746-4e9f-89ad-da96e2b2ab17\") " pod="openshift-nmstate/nmstate-handler-2dw5k" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.854796 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25qgw\" (UniqueName: \"kubernetes.io/projected/e3a5ef47-65ea-4135-ad67-c83b0aa175f4-kube-api-access-25qgw\") pod \"nmstate-webhook-8474b5b9d8-rprnk\" (UID: \"e3a5ef47-65ea-4135-ad67-c83b0aa175f4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rprnk" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.854880 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e3a5ef47-65ea-4135-ad67-c83b0aa175f4-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rprnk\" (UID: \"e3a5ef47-65ea-4135-ad67-c83b0aa175f4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rprnk" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.854979 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8k2f\" (UniqueName: \"kubernetes.io/projected/8dcdd494-a746-4e9f-89ad-da96e2b2ab17-kube-api-access-h8k2f\") pod \"nmstate-handler-2dw5k\" (UID: \"8dcdd494-a746-4e9f-89ad-da96e2b2ab17\") " pod="openshift-nmstate/nmstate-handler-2dw5k" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.898865 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="135b9f51-26ac-44c4-a817-cbfa4b36ae54" path="/var/lib/kubelet/pods/135b9f51-26ac-44c4-a817-cbfa4b36ae54/volumes" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.956117 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e3a5ef47-65ea-4135-ad67-c83b0aa175f4-tls-key-pair\") pod 
\"nmstate-webhook-8474b5b9d8-rprnk\" (UID: \"e3a5ef47-65ea-4135-ad67-c83b0aa175f4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rprnk" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.956432 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8k2f\" (UniqueName: \"kubernetes.io/projected/8dcdd494-a746-4e9f-89ad-da96e2b2ab17-kube-api-access-h8k2f\") pod \"nmstate-handler-2dw5k\" (UID: \"8dcdd494-a746-4e9f-89ad-da96e2b2ab17\") " pod="openshift-nmstate/nmstate-handler-2dw5k" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.956498 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmlmf\" (UniqueName: \"kubernetes.io/projected/caa645f7-f683-4bda-851a-91732a41d8fc-kube-api-access-rmlmf\") pod \"nmstate-metrics-54757c584b-ft8jl\" (UID: \"caa645f7-f683-4bda-851a-91732a41d8fc\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ft8jl" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.956538 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8dcdd494-a746-4e9f-89ad-da96e2b2ab17-nmstate-lock\") pod \"nmstate-handler-2dw5k\" (UID: \"8dcdd494-a746-4e9f-89ad-da96e2b2ab17\") " pod="openshift-nmstate/nmstate-handler-2dw5k" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.956555 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whrsx\" (UniqueName: \"kubernetes.io/projected/8a3c7538-e078-4e89-b34d-dd128942e19d-kube-api-access-whrsx\") pod \"nmstate-console-plugin-7754f76f8b-shqgw\" (UID: \"8a3c7538-e078-4e89-b34d-dd128942e19d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-shqgw" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.956572 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a3c7538-e078-4e89-b34d-dd128942e19d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-shqgw\" (UID: \"8a3c7538-e078-4e89-b34d-dd128942e19d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-shqgw" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.956590 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8dcdd494-a746-4e9f-89ad-da96e2b2ab17-dbus-socket\") pod \"nmstate-handler-2dw5k\" (UID: \"8dcdd494-a746-4e9f-89ad-da96e2b2ab17\") " pod="openshift-nmstate/nmstate-handler-2dw5k" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.956615 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8a3c7538-e078-4e89-b34d-dd128942e19d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-shqgw\" (UID: \"8a3c7538-e078-4e89-b34d-dd128942e19d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-shqgw" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.956634 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8dcdd494-a746-4e9f-89ad-da96e2b2ab17-ovs-socket\") pod \"nmstate-handler-2dw5k\" (UID: \"8dcdd494-a746-4e9f-89ad-da96e2b2ab17\") " pod="openshift-nmstate/nmstate-handler-2dw5k" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.956662 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25qgw\" (UniqueName: 
\"kubernetes.io/projected/e3a5ef47-65ea-4135-ad67-c83b0aa175f4-kube-api-access-25qgw\") pod \"nmstate-webhook-8474b5b9d8-rprnk\" (UID: \"e3a5ef47-65ea-4135-ad67-c83b0aa175f4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rprnk" Jan 28 15:15:32 crc kubenswrapper[4893]: E0128 15:15:32.961116 4893 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 28 15:15:32 crc kubenswrapper[4893]: E0128 15:15:32.961211 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a3c7538-e078-4e89-b34d-dd128942e19d-plugin-serving-cert podName:8a3c7538-e078-4e89-b34d-dd128942e19d nodeName:}" failed. No retries permitted until 2026-01-28 15:15:33.461188344 +0000 UTC m=+851.234803372 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/8a3c7538-e078-4e89-b34d-dd128942e19d-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-shqgw" (UID: "8a3c7538-e078-4e89-b34d-dd128942e19d") : secret "plugin-serving-cert" not found Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.961237 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8dcdd494-a746-4e9f-89ad-da96e2b2ab17-ovs-socket\") pod \"nmstate-handler-2dw5k\" (UID: \"8dcdd494-a746-4e9f-89ad-da96e2b2ab17\") " pod="openshift-nmstate/nmstate-handler-2dw5k" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.961127 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8dcdd494-a746-4e9f-89ad-da96e2b2ab17-nmstate-lock\") pod \"nmstate-handler-2dw5k\" (UID: \"8dcdd494-a746-4e9f-89ad-da96e2b2ab17\") " pod="openshift-nmstate/nmstate-handler-2dw5k" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.961457 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8dcdd494-a746-4e9f-89ad-da96e2b2ab17-dbus-socket\") pod \"nmstate-handler-2dw5k\" (UID: \"8dcdd494-a746-4e9f-89ad-da96e2b2ab17\") " pod="openshift-nmstate/nmstate-handler-2dw5k" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.962057 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8a3c7538-e078-4e89-b34d-dd128942e19d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-shqgw\" (UID: \"8a3c7538-e078-4e89-b34d-dd128942e19d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-shqgw" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.978759 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e3a5ef47-65ea-4135-ad67-c83b0aa175f4-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rprnk\" (UID: \"e3a5ef47-65ea-4135-ad67-c83b0aa175f4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rprnk" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.980369 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25qgw\" (UniqueName: \"kubernetes.io/projected/e3a5ef47-65ea-4135-ad67-c83b0aa175f4-kube-api-access-25qgw\") pod \"nmstate-webhook-8474b5b9d8-rprnk\" (UID: \"e3a5ef47-65ea-4135-ad67-c83b0aa175f4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rprnk" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.980532 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-whrsx\" (UniqueName: \"kubernetes.io/projected/8a3c7538-e078-4e89-b34d-dd128942e19d-kube-api-access-whrsx\") pod \"nmstate-console-plugin-7754f76f8b-shqgw\" (UID: \"8a3c7538-e078-4e89-b34d-dd128942e19d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-shqgw" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.980627 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6qxps" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.980658 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6qxps" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.982466 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8k2f\" (UniqueName: \"kubernetes.io/projected/8dcdd494-a746-4e9f-89ad-da96e2b2ab17-kube-api-access-h8k2f\") pod \"nmstate-handler-2dw5k\" (UID: \"8dcdd494-a746-4e9f-89ad-da96e2b2ab17\") " pod="openshift-nmstate/nmstate-handler-2dw5k" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.993593 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rprnk" Jan 28 15:15:32 crc kubenswrapper[4893]: I0128 15:15:32.995403 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmlmf\" (UniqueName: \"kubernetes.io/projected/caa645f7-f683-4bda-851a-91732a41d8fc-kube-api-access-rmlmf\") pod \"nmstate-metrics-54757c584b-ft8jl\" (UID: \"caa645f7-f683-4bda-851a-91732a41d8fc\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ft8jl" Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.016120 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-2dw5k" Jan 28 15:15:33 crc kubenswrapper[4893]: E0128 15:15:33.033712 4893 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-webhook-8474b5b9d8-rprnk_openshift-nmstate_e3a5ef47-65ea-4135-ad67-c83b0aa175f4_0(4888394d3ebffa4b351b791359a343f6a7fc7fb247446ca15c2795bed77b402f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:15:33 crc kubenswrapper[4893]: E0128 15:15:33.033802 4893 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-webhook-8474b5b9d8-rprnk_openshift-nmstate_e3a5ef47-65ea-4135-ad67-c83b0aa175f4_0(4888394d3ebffa4b351b791359a343f6a7fc7fb247446ca15c2795bed77b402f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rprnk" Jan 28 15:15:33 crc kubenswrapper[4893]: E0128 15:15:33.033826 4893 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-webhook-8474b5b9d8-rprnk_openshift-nmstate_e3a5ef47-65ea-4135-ad67-c83b0aa175f4_0(4888394d3ebffa4b351b791359a343f6a7fc7fb247446ca15c2795bed77b402f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rprnk" Jan 28 15:15:33 crc kubenswrapper[4893]: E0128 15:15:33.033880 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-webhook-8474b5b9d8-rprnk_openshift-nmstate(e3a5ef47-65ea-4135-ad67-c83b0aa175f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-webhook-8474b5b9d8-rprnk_openshift-nmstate(e3a5ef47-65ea-4135-ad67-c83b0aa175f4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-webhook-8474b5b9d8-rprnk_openshift-nmstate_e3a5ef47-65ea-4135-ad67-c83b0aa175f4_0(4888394d3ebffa4b351b791359a343f6a7fc7fb247446ca15c2795bed77b402f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rprnk" podUID="e3a5ef47-65ea-4135-ad67-c83b0aa175f4" Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.037594 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6qxps" Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.053743 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-595f694657-cvf56"] Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.057884 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.158328 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ef9f2441-625b-46c4-8597-2ff7fc781dd0-console-oauth-config\") pod \"console-595f694657-cvf56\" (UID: \"ef9f2441-625b-46c4-8597-2ff7fc781dd0\") " pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.158399 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svmbl\" (UniqueName: \"kubernetes.io/projected/ef9f2441-625b-46c4-8597-2ff7fc781dd0-kube-api-access-svmbl\") pod \"console-595f694657-cvf56\" (UID: \"ef9f2441-625b-46c4-8597-2ff7fc781dd0\") " pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.158467 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ef9f2441-625b-46c4-8597-2ff7fc781dd0-console-serving-cert\") pod \"console-595f694657-cvf56\" (UID: \"ef9f2441-625b-46c4-8597-2ff7fc781dd0\") " pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.158529 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef9f2441-625b-46c4-8597-2ff7fc781dd0-trusted-ca-bundle\") pod \"console-595f694657-cvf56\" (UID: \"ef9f2441-625b-46c4-8597-2ff7fc781dd0\") " pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.158698 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ef9f2441-625b-46c4-8597-2ff7fc781dd0-console-config\") pod \"console-595f694657-cvf56\" (UID: \"ef9f2441-625b-46c4-8597-2ff7fc781dd0\") " pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:33 crc 
kubenswrapper[4893]: I0128 15:15:33.158760 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ef9f2441-625b-46c4-8597-2ff7fc781dd0-oauth-serving-cert\") pod \"console-595f694657-cvf56\" (UID: \"ef9f2441-625b-46c4-8597-2ff7fc781dd0\") " pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.158823 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ef9f2441-625b-46c4-8597-2ff7fc781dd0-service-ca\") pod \"console-595f694657-cvf56\" (UID: \"ef9f2441-625b-46c4-8597-2ff7fc781dd0\") " pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.259320 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef9f2441-625b-46c4-8597-2ff7fc781dd0-trusted-ca-bundle\") pod \"console-595f694657-cvf56\" (UID: \"ef9f2441-625b-46c4-8597-2ff7fc781dd0\") " pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.259430 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ef9f2441-625b-46c4-8597-2ff7fc781dd0-console-config\") pod \"console-595f694657-cvf56\" (UID: \"ef9f2441-625b-46c4-8597-2ff7fc781dd0\") " pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.259458 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ef9f2441-625b-46c4-8597-2ff7fc781dd0-oauth-serving-cert\") pod \"console-595f694657-cvf56\" (UID: \"ef9f2441-625b-46c4-8597-2ff7fc781dd0\") " pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.259515 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ef9f2441-625b-46c4-8597-2ff7fc781dd0-service-ca\") pod \"console-595f694657-cvf56\" (UID: \"ef9f2441-625b-46c4-8597-2ff7fc781dd0\") " pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.259546 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ef9f2441-625b-46c4-8597-2ff7fc781dd0-console-oauth-config\") pod \"console-595f694657-cvf56\" (UID: \"ef9f2441-625b-46c4-8597-2ff7fc781dd0\") " pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.259577 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svmbl\" (UniqueName: \"kubernetes.io/projected/ef9f2441-625b-46c4-8597-2ff7fc781dd0-kube-api-access-svmbl\") pod \"console-595f694657-cvf56\" (UID: \"ef9f2441-625b-46c4-8597-2ff7fc781dd0\") " pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.259607 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ef9f2441-625b-46c4-8597-2ff7fc781dd0-console-serving-cert\") pod \"console-595f694657-cvf56\" (UID: \"ef9f2441-625b-46c4-8597-2ff7fc781dd0\") " 
pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.260410 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ef9f2441-625b-46c4-8597-2ff7fc781dd0-oauth-serving-cert\") pod \"console-595f694657-cvf56\" (UID: \"ef9f2441-625b-46c4-8597-2ff7fc781dd0\") " pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.260568 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ef9f2441-625b-46c4-8597-2ff7fc781dd0-console-config\") pod \"console-595f694657-cvf56\" (UID: \"ef9f2441-625b-46c4-8597-2ff7fc781dd0\") " pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.260630 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ef9f2441-625b-46c4-8597-2ff7fc781dd0-service-ca\") pod \"console-595f694657-cvf56\" (UID: \"ef9f2441-625b-46c4-8597-2ff7fc781dd0\") " pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.261186 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef9f2441-625b-46c4-8597-2ff7fc781dd0-trusted-ca-bundle\") pod \"console-595f694657-cvf56\" (UID: \"ef9f2441-625b-46c4-8597-2ff7fc781dd0\") " pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.265326 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ef9f2441-625b-46c4-8597-2ff7fc781dd0-console-oauth-config\") pod \"console-595f694657-cvf56\" (UID: \"ef9f2441-625b-46c4-8597-2ff7fc781dd0\") " pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.265529 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ef9f2441-625b-46c4-8597-2ff7fc781dd0-console-serving-cert\") pod \"console-595f694657-cvf56\" (UID: \"ef9f2441-625b-46c4-8597-2ff7fc781dd0\") " pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.279299 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-ft8jl" Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.281120 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svmbl\" (UniqueName: \"kubernetes.io/projected/ef9f2441-625b-46c4-8597-2ff7fc781dd0-kube-api-access-svmbl\") pod \"console-595f694657-cvf56\" (UID: \"ef9f2441-625b-46c4-8597-2ff7fc781dd0\") " pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:33 crc kubenswrapper[4893]: E0128 15:15:33.303245 4893 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-metrics-54757c584b-ft8jl_openshift-nmstate_caa645f7-f683-4bda-851a-91732a41d8fc_0(ff746b84a005f9022b166ad925b17d3f9dfe90ec86023c4acf6b0684af1301e1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 28 15:15:33 crc kubenswrapper[4893]: E0128 15:15:33.303356 4893 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-metrics-54757c584b-ft8jl_openshift-nmstate_caa645f7-f683-4bda-851a-91732a41d8fc_0(ff746b84a005f9022b166ad925b17d3f9dfe90ec86023c4acf6b0684af1301e1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-metrics-54757c584b-ft8jl" Jan 28 15:15:33 crc kubenswrapper[4893]: E0128 15:15:33.303402 4893 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-metrics-54757c584b-ft8jl_openshift-nmstate_caa645f7-f683-4bda-851a-91732a41d8fc_0(ff746b84a005f9022b166ad925b17d3f9dfe90ec86023c4acf6b0684af1301e1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-metrics-54757c584b-ft8jl" Jan 28 15:15:33 crc kubenswrapper[4893]: E0128 15:15:33.303495 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-metrics-54757c584b-ft8jl_openshift-nmstate(caa645f7-f683-4bda-851a-91732a41d8fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-metrics-54757c584b-ft8jl_openshift-nmstate(caa645f7-f683-4bda-851a-91732a41d8fc)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-metrics-54757c584b-ft8jl_openshift-nmstate_caa645f7-f683-4bda-851a-91732a41d8fc_0(ff746b84a005f9022b166ad925b17d3f9dfe90ec86023c4acf6b0684af1301e1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-metrics-54757c584b-ft8jl" podUID="caa645f7-f683-4bda-851a-91732a41d8fc" Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.391508 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:33 crc kubenswrapper[4893]: E0128 15:15:33.414511 4893 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-595f694657-cvf56_openshift-console_ef9f2441-625b-46c4-8597-2ff7fc781dd0_0(e233b8a1f261668e60c0459e263299c4db9d4e72f500659f028d920814172503): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:15:33 crc kubenswrapper[4893]: E0128 15:15:33.414565 4893 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-595f694657-cvf56_openshift-console_ef9f2441-625b-46c4-8597-2ff7fc781dd0_0(e233b8a1f261668e60c0459e263299c4db9d4e72f500659f028d920814172503): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:33 crc kubenswrapper[4893]: E0128 15:15:33.414588 4893 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-595f694657-cvf56_openshift-console_ef9f2441-625b-46c4-8597-2ff7fc781dd0_0(e233b8a1f261668e60c0459e263299c4db9d4e72f500659f028d920814172503): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:33 crc kubenswrapper[4893]: E0128 15:15:33.414626 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"console-595f694657-cvf56_openshift-console(ef9f2441-625b-46c4-8597-2ff7fc781dd0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"console-595f694657-cvf56_openshift-console(ef9f2441-625b-46c4-8597-2ff7fc781dd0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-595f694657-cvf56_openshift-console_ef9f2441-625b-46c4-8597-2ff7fc781dd0_0(e233b8a1f261668e60c0459e263299c4db9d4e72f500659f028d920814172503): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-console/console-595f694657-cvf56" podUID="ef9f2441-625b-46c4-8597-2ff7fc781dd0" Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.562238 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a3c7538-e078-4e89-b34d-dd128942e19d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-shqgw\" (UID: \"8a3c7538-e078-4e89-b34d-dd128942e19d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-shqgw" Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.566212 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a3c7538-e078-4e89-b34d-dd128942e19d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-shqgw\" (UID: \"8a3c7538-e078-4e89-b34d-dd128942e19d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-shqgw" Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.713346 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2dw5k" event={"ID":"8dcdd494-a746-4e9f-89ad-da96e2b2ab17","Type":"ContainerStarted","Data":"2e7d38fec3c098405cc3980aa21b5f146b5aa87e7b8077829aef62e4d256f6e7"} Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.718564 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" event={"ID":"6d321a2f-5173-43fe-877f-5659444981a3","Type":"ContainerStarted","Data":"2ecfefcd951c253a7c3e3ce585b07e859ef1379e87cf0df5a187f280f7bc705b"} Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.718600 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" event={"ID":"6d321a2f-5173-43fe-877f-5659444981a3","Type":"ContainerStarted","Data":"f14359739e84d655176e0ea7bcaca68f82f5255ae9cb2ac65e39b97fcbec1603"} Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.718611 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" event={"ID":"6d321a2f-5173-43fe-877f-5659444981a3","Type":"ContainerStarted","Data":"4d1b3d607bf24af5bb3aa46c11901f50354c85a6650b87f1eca0461dee2a36b3"} Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.718629 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" event={"ID":"6d321a2f-5173-43fe-877f-5659444981a3","Type":"ContainerStarted","Data":"f4bca2cc6433061f3dbb84775f6a13f86e587d69a735a42ddeb93fc2e1c2d9e1"} Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.718640 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" 
event={"ID":"6d321a2f-5173-43fe-877f-5659444981a3","Type":"ContainerStarted","Data":"956efad7f8596f1ad7ccb35aacd1b232672222b2da6256d68485b639071ed977"} Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.718652 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" event={"ID":"6d321a2f-5173-43fe-877f-5659444981a3","Type":"ContainerStarted","Data":"629ad6e1544021dc0e4ad22012214576967b72c4bd2238e550094e8f1cbc4b11"} Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.761114 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6qxps" Jan 28 15:15:33 crc kubenswrapper[4893]: I0128 15:15:33.767062 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-shqgw" Jan 28 15:15:33 crc kubenswrapper[4893]: E0128 15:15:33.789917 4893 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-console-plugin-7754f76f8b-shqgw_openshift-nmstate_8a3c7538-e078-4e89-b34d-dd128942e19d_0(3f57c71f42d785b47499dcca4b644affeb469f08ec38af95ad2859c518a2b49d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:15:33 crc kubenswrapper[4893]: E0128 15:15:33.790012 4893 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-console-plugin-7754f76f8b-shqgw_openshift-nmstate_8a3c7538-e078-4e89-b34d-dd128942e19d_0(3f57c71f42d785b47499dcca4b644affeb469f08ec38af95ad2859c518a2b49d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-shqgw" Jan 28 15:15:33 crc kubenswrapper[4893]: E0128 15:15:33.790033 4893 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-console-plugin-7754f76f8b-shqgw_openshift-nmstate_8a3c7538-e078-4e89-b34d-dd128942e19d_0(3f57c71f42d785b47499dcca4b644affeb469f08ec38af95ad2859c518a2b49d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-shqgw" Jan 28 15:15:33 crc kubenswrapper[4893]: E0128 15:15:33.790075 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-console-plugin-7754f76f8b-shqgw_openshift-nmstate(8a3c7538-e078-4e89-b34d-dd128942e19d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-console-plugin-7754f76f8b-shqgw_openshift-nmstate(8a3c7538-e078-4e89-b34d-dd128942e19d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-console-plugin-7754f76f8b-shqgw_openshift-nmstate_8a3c7538-e078-4e89-b34d-dd128942e19d_0(3f57c71f42d785b47499dcca4b644affeb469f08ec38af95ad2859c518a2b49d): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-shqgw" podUID="8a3c7538-e078-4e89-b34d-dd128942e19d" Jan 28 15:15:35 crc kubenswrapper[4893]: I0128 15:15:35.451447 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6qxps"] Jan 28 15:15:35 crc kubenswrapper[4893]: I0128 15:15:35.723060 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:15:35 crc kubenswrapper[4893]: I0128 15:15:35.723436 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:15:35 crc kubenswrapper[4893]: I0128 15:15:35.723537 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" Jan 28 15:15:35 crc kubenswrapper[4893]: I0128 15:15:35.724218 4893 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ced3cae3c54b613ab0ce9fe21bab2e1babeff0d0bb895261d140f95238422f3"} pod="openshift-machine-config-operator/machine-config-daemon-l2nht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:15:35 crc kubenswrapper[4893]: I0128 15:15:35.724294 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" containerID="cri-o://3ced3cae3c54b613ab0ce9fe21bab2e1babeff0d0bb895261d140f95238422f3" gracePeriod=600 Jan 28 15:15:35 crc kubenswrapper[4893]: I0128 15:15:35.730524 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6qxps" podUID="d191c46f-7ccf-4dc8-a2c8-477724435ff0" containerName="registry-server" containerID="cri-o://51e7fac59ae1c6e7dc75bd3a7eb47f5a310ce445591ec5b5b4ccbcc9be3e02cb" gracePeriod=2 Jan 28 15:15:35 crc kubenswrapper[4893]: I0128 15:15:35.730924 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2dw5k" event={"ID":"8dcdd494-a746-4e9f-89ad-da96e2b2ab17","Type":"ContainerStarted","Data":"6402273a019a66c4ce9d9c01cdc96712397e4e45819c8f911f52cdc807de7117"} Jan 28 15:15:35 crc kubenswrapper[4893]: I0128 15:15:35.731015 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-2dw5k" Jan 28 15:15:35 crc kubenswrapper[4893]: I0128 15:15:35.751293 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-2dw5k" podStartSLOduration=1.552381795 podStartE2EDuration="3.751273495s" podCreationTimestamp="2026-01-28 15:15:32 +0000 UTC" firstStartedPulling="2026-01-28 15:15:33.066344499 +0000 UTC m=+850.839959527" lastFinishedPulling="2026-01-28 15:15:35.265236199 +0000 UTC m=+853.038851227" observedRunningTime="2026-01-28 15:15:35.747618136 +0000 UTC m=+853.521233174" watchObservedRunningTime="2026-01-28 
15:15:35.751273495 +0000 UTC m=+853.524888523" Jan 28 15:15:36 crc kubenswrapper[4893]: I0128 15:15:36.738942 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" event={"ID":"6d321a2f-5173-43fe-877f-5659444981a3","Type":"ContainerStarted","Data":"46f4f60bbd73865caf472388bf8f229f55128101465cd2d872bd9f11d9b8786c"} Jan 28 15:15:37 crc kubenswrapper[4893]: I0128 15:15:37.755411 4893 generic.go:334] "Generic (PLEG): container finished" podID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerID="3ced3cae3c54b613ab0ce9fe21bab2e1babeff0d0bb895261d140f95238422f3" exitCode=0 Jan 28 15:15:37 crc kubenswrapper[4893]: I0128 15:15:37.755492 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" event={"ID":"b2ddd967-f9a8-464a-95de-512c9c5874fd","Type":"ContainerDied","Data":"3ced3cae3c54b613ab0ce9fe21bab2e1babeff0d0bb895261d140f95238422f3"} Jan 28 15:15:37 crc kubenswrapper[4893]: I0128 15:15:37.755532 4893 scope.go:117] "RemoveContainer" containerID="194444ae046498e08f8f911d426d18ad3d7857b481964cff9f834815e3198cff" Jan 28 15:15:37 crc kubenswrapper[4893]: I0128 15:15:37.758170 4893 generic.go:334] "Generic (PLEG): container finished" podID="d191c46f-7ccf-4dc8-a2c8-477724435ff0" containerID="51e7fac59ae1c6e7dc75bd3a7eb47f5a310ce445591ec5b5b4ccbcc9be3e02cb" exitCode=0 Jan 28 15:15:37 crc kubenswrapper[4893]: I0128 15:15:37.758203 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qxps" event={"ID":"d191c46f-7ccf-4dc8-a2c8-477724435ff0","Type":"ContainerDied","Data":"51e7fac59ae1c6e7dc75bd3a7eb47f5a310ce445591ec5b5b4ccbcc9be3e02cb"} Jan 28 15:15:37 crc kubenswrapper[4893]: I0128 15:15:37.784019 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6qxps" Jan 28 15:15:37 crc kubenswrapper[4893]: I0128 15:15:37.916502 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d191c46f-7ccf-4dc8-a2c8-477724435ff0-catalog-content\") pod \"d191c46f-7ccf-4dc8-a2c8-477724435ff0\" (UID: \"d191c46f-7ccf-4dc8-a2c8-477724435ff0\") " Jan 28 15:15:37 crc kubenswrapper[4893]: I0128 15:15:37.916829 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d191c46f-7ccf-4dc8-a2c8-477724435ff0-utilities\") pod \"d191c46f-7ccf-4dc8-a2c8-477724435ff0\" (UID: \"d191c46f-7ccf-4dc8-a2c8-477724435ff0\") " Jan 28 15:15:37 crc kubenswrapper[4893]: I0128 15:15:37.916921 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w86hr\" (UniqueName: \"kubernetes.io/projected/d191c46f-7ccf-4dc8-a2c8-477724435ff0-kube-api-access-w86hr\") pod \"d191c46f-7ccf-4dc8-a2c8-477724435ff0\" (UID: \"d191c46f-7ccf-4dc8-a2c8-477724435ff0\") " Jan 28 15:15:37 crc kubenswrapper[4893]: I0128 15:15:37.918356 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d191c46f-7ccf-4dc8-a2c8-477724435ff0-utilities" (OuterVolumeSpecName: "utilities") pod "d191c46f-7ccf-4dc8-a2c8-477724435ff0" (UID: "d191c46f-7ccf-4dc8-a2c8-477724435ff0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:15:37 crc kubenswrapper[4893]: I0128 15:15:37.921349 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d191c46f-7ccf-4dc8-a2c8-477724435ff0-kube-api-access-w86hr" (OuterVolumeSpecName: "kube-api-access-w86hr") pod "d191c46f-7ccf-4dc8-a2c8-477724435ff0" (UID: "d191c46f-7ccf-4dc8-a2c8-477724435ff0"). InnerVolumeSpecName "kube-api-access-w86hr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:15:38 crc kubenswrapper[4893]: I0128 15:15:38.019154 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w86hr\" (UniqueName: \"kubernetes.io/projected/d191c46f-7ccf-4dc8-a2c8-477724435ff0-kube-api-access-w86hr\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:38 crc kubenswrapper[4893]: I0128 15:15:38.019253 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d191c46f-7ccf-4dc8-a2c8-477724435ff0-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:38 crc kubenswrapper[4893]: I0128 15:15:38.039733 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d191c46f-7ccf-4dc8-a2c8-477724435ff0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d191c46f-7ccf-4dc8-a2c8-477724435ff0" (UID: "d191c46f-7ccf-4dc8-a2c8-477724435ff0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:15:38 crc kubenswrapper[4893]: I0128 15:15:38.120034 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d191c46f-7ccf-4dc8-a2c8-477724435ff0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:38 crc kubenswrapper[4893]: I0128 15:15:38.663017 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ft8jl"] Jan 28 15:15:38 crc kubenswrapper[4893]: I0128 15:15:38.663173 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-ft8jl" Jan 28 15:15:38 crc kubenswrapper[4893]: I0128 15:15:38.663772 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-ft8jl" Jan 28 15:15:38 crc kubenswrapper[4893]: I0128 15:15:38.667439 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-rprnk"] Jan 28 15:15:38 crc kubenswrapper[4893]: I0128 15:15:38.667608 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rprnk" Jan 28 15:15:38 crc kubenswrapper[4893]: I0128 15:15:38.668211 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rprnk" Jan 28 15:15:38 crc kubenswrapper[4893]: I0128 15:15:38.674710 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-shqgw"] Jan 28 15:15:38 crc kubenswrapper[4893]: I0128 15:15:38.674844 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-shqgw" Jan 28 15:15:38 crc kubenswrapper[4893]: I0128 15:15:38.675269 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-shqgw" Jan 28 15:15:38 crc kubenswrapper[4893]: I0128 15:15:38.692789 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-595f694657-cvf56"] Jan 28 15:15:38 crc kubenswrapper[4893]: I0128 15:15:38.692931 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:38 crc kubenswrapper[4893]: I0128 15:15:38.693362 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:38 crc kubenswrapper[4893]: E0128 15:15:38.699945 4893 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-metrics-54757c584b-ft8jl_openshift-nmstate_caa645f7-f683-4bda-851a-91732a41d8fc_0(b0a2ed2fe5c25514c25176e7e56e20912127e4d0b189c360a20e36a80757301e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:15:38 crc kubenswrapper[4893]: E0128 15:15:38.700018 4893 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-metrics-54757c584b-ft8jl_openshift-nmstate_caa645f7-f683-4bda-851a-91732a41d8fc_0(b0a2ed2fe5c25514c25176e7e56e20912127e4d0b189c360a20e36a80757301e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-metrics-54757c584b-ft8jl" Jan 28 15:15:38 crc kubenswrapper[4893]: E0128 15:15:38.700040 4893 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-metrics-54757c584b-ft8jl_openshift-nmstate_caa645f7-f683-4bda-851a-91732a41d8fc_0(b0a2ed2fe5c25514c25176e7e56e20912127e4d0b189c360a20e36a80757301e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-metrics-54757c584b-ft8jl" Jan 28 15:15:38 crc kubenswrapper[4893]: E0128 15:15:38.700095 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-metrics-54757c584b-ft8jl_openshift-nmstate(caa645f7-f683-4bda-851a-91732a41d8fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-metrics-54757c584b-ft8jl_openshift-nmstate(caa645f7-f683-4bda-851a-91732a41d8fc)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-metrics-54757c584b-ft8jl_openshift-nmstate_caa645f7-f683-4bda-851a-91732a41d8fc_0(b0a2ed2fe5c25514c25176e7e56e20912127e4d0b189c360a20e36a80757301e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-metrics-54757c584b-ft8jl" podUID="caa645f7-f683-4bda-851a-91732a41d8fc" Jan 28 15:15:38 crc kubenswrapper[4893]: E0128 15:15:38.708385 4893 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-webhook-8474b5b9d8-rprnk_openshift-nmstate_e3a5ef47-65ea-4135-ad67-c83b0aa175f4_0(eb0ca7ca141e7884e999fcdd17c24ca48076d9e339383c73ec5874ab04bfefe6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 28 15:15:38 crc kubenswrapper[4893]: E0128 15:15:38.708525 4893 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-webhook-8474b5b9d8-rprnk_openshift-nmstate_e3a5ef47-65ea-4135-ad67-c83b0aa175f4_0(eb0ca7ca141e7884e999fcdd17c24ca48076d9e339383c73ec5874ab04bfefe6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rprnk" Jan 28 15:15:38 crc kubenswrapper[4893]: E0128 15:15:38.708563 4893 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-webhook-8474b5b9d8-rprnk_openshift-nmstate_e3a5ef47-65ea-4135-ad67-c83b0aa175f4_0(eb0ca7ca141e7884e999fcdd17c24ca48076d9e339383c73ec5874ab04bfefe6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rprnk" Jan 28 15:15:38 crc kubenswrapper[4893]: E0128 15:15:38.708626 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-webhook-8474b5b9d8-rprnk_openshift-nmstate(e3a5ef47-65ea-4135-ad67-c83b0aa175f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-webhook-8474b5b9d8-rprnk_openshift-nmstate(e3a5ef47-65ea-4135-ad67-c83b0aa175f4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-webhook-8474b5b9d8-rprnk_openshift-nmstate_e3a5ef47-65ea-4135-ad67-c83b0aa175f4_0(eb0ca7ca141e7884e999fcdd17c24ca48076d9e339383c73ec5874ab04bfefe6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rprnk" podUID="e3a5ef47-65ea-4135-ad67-c83b0aa175f4" Jan 28 15:15:38 crc kubenswrapper[4893]: E0128 15:15:38.727013 4893 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-console-plugin-7754f76f8b-shqgw_openshift-nmstate_8a3c7538-e078-4e89-b34d-dd128942e19d_0(69a5623ee6f7441be3bafffe43f22c839daedb9111251ee91785b00756eec04e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:15:38 crc kubenswrapper[4893]: E0128 15:15:38.727107 4893 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-console-plugin-7754f76f8b-shqgw_openshift-nmstate_8a3c7538-e078-4e89-b34d-dd128942e19d_0(69a5623ee6f7441be3bafffe43f22c839daedb9111251ee91785b00756eec04e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-shqgw" Jan 28 15:15:38 crc kubenswrapper[4893]: E0128 15:15:38.727132 4893 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-console-plugin-7754f76f8b-shqgw_openshift-nmstate_8a3c7538-e078-4e89-b34d-dd128942e19d_0(69a5623ee6f7441be3bafffe43f22c839daedb9111251ee91785b00756eec04e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-shqgw" Jan 28 15:15:38 crc kubenswrapper[4893]: E0128 15:15:38.727185 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nmstate-console-plugin-7754f76f8b-shqgw_openshift-nmstate(8a3c7538-e078-4e89-b34d-dd128942e19d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nmstate-console-plugin-7754f76f8b-shqgw_openshift-nmstate(8a3c7538-e078-4e89-b34d-dd128942e19d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_nmstate-console-plugin-7754f76f8b-shqgw_openshift-nmstate_8a3c7538-e078-4e89-b34d-dd128942e19d_0(69a5623ee6f7441be3bafffe43f22c839daedb9111251ee91785b00756eec04e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-shqgw" podUID="8a3c7538-e078-4e89-b34d-dd128942e19d" Jan 28 15:15:38 crc kubenswrapper[4893]: E0128 15:15:38.739159 4893 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-595f694657-cvf56_openshift-console_ef9f2441-625b-46c4-8597-2ff7fc781dd0_0(cbf055b8d7912bacf17252173ab58702567bbf56af6b1355ffda24affbbafcc7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:15:38 crc kubenswrapper[4893]: E0128 15:15:38.739240 4893 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-595f694657-cvf56_openshift-console_ef9f2441-625b-46c4-8597-2ff7fc781dd0_0(cbf055b8d7912bacf17252173ab58702567bbf56af6b1355ffda24affbbafcc7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:38 crc kubenswrapper[4893]: E0128 15:15:38.739281 4893 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-595f694657-cvf56_openshift-console_ef9f2441-625b-46c4-8597-2ff7fc781dd0_0(cbf055b8d7912bacf17252173ab58702567bbf56af6b1355ffda24affbbafcc7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:38 crc kubenswrapper[4893]: E0128 15:15:38.739326 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"console-595f694657-cvf56_openshift-console(ef9f2441-625b-46c4-8597-2ff7fc781dd0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"console-595f694657-cvf56_openshift-console(ef9f2441-625b-46c4-8597-2ff7fc781dd0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-595f694657-cvf56_openshift-console_ef9f2441-625b-46c4-8597-2ff7fc781dd0_0(cbf055b8d7912bacf17252173ab58702567bbf56af6b1355ffda24affbbafcc7): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-console/console-595f694657-cvf56" podUID="ef9f2441-625b-46c4-8597-2ff7fc781dd0" Jan 28 15:15:38 crc kubenswrapper[4893]: I0128 15:15:38.774264 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" event={"ID":"6d321a2f-5173-43fe-877f-5659444981a3","Type":"ContainerStarted","Data":"ff184b248979848da2cc46c1c7c9b5263dfdc6ee822ada69806ca6802e0797cf"} Jan 28 15:15:38 crc kubenswrapper[4893]: I0128 15:15:38.775355 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:38 crc kubenswrapper[4893]: I0128 15:15:38.775551 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:38 crc kubenswrapper[4893]: I0128 15:15:38.775623 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:38 crc kubenswrapper[4893]: I0128 15:15:38.784065 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qxps" event={"ID":"d191c46f-7ccf-4dc8-a2c8-477724435ff0","Type":"ContainerDied","Data":"ad435714b1828298e4422a5210bed16e7137c6b6bc5dace18416967c8871851f"} Jan 28 15:15:38 crc kubenswrapper[4893]: I0128 15:15:38.784117 4893 scope.go:117] "RemoveContainer" containerID="51e7fac59ae1c6e7dc75bd3a7eb47f5a310ce445591ec5b5b4ccbcc9be3e02cb" Jan 28 15:15:38 crc kubenswrapper[4893]: I0128 15:15:38.784166 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6qxps" Jan 28 15:15:38 crc kubenswrapper[4893]: I0128 15:15:38.807854 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" podStartSLOduration=7.807838481 podStartE2EDuration="7.807838481s" podCreationTimestamp="2026-01-28 15:15:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:15:38.805190739 +0000 UTC m=+856.578805787" watchObservedRunningTime="2026-01-28 15:15:38.807838481 +0000 UTC m=+856.581453509" Jan 28 15:15:38 crc kubenswrapper[4893]: I0128 15:15:38.813654 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:38 crc kubenswrapper[4893]: I0128 15:15:38.828999 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:15:38 crc kubenswrapper[4893]: I0128 15:15:38.833585 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6qxps"] Jan 28 15:15:38 crc kubenswrapper[4893]: I0128 15:15:38.838540 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6qxps"] Jan 28 15:15:38 crc kubenswrapper[4893]: I0128 15:15:38.899409 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d191c46f-7ccf-4dc8-a2c8-477724435ff0" path="/var/lib/kubelet/pods/d191c46f-7ccf-4dc8-a2c8-477724435ff0/volumes" Jan 28 15:15:39 crc kubenswrapper[4893]: I0128 15:15:39.007520 4893 scope.go:117] "RemoveContainer" containerID="a864d74b4ed038e2b08429de4141dcd9dbfb4fb13c6cb25a63d63d727fb5ea7d" Jan 28 15:15:39 crc kubenswrapper[4893]: I0128 15:15:39.029652 4893 scope.go:117] "RemoveContainer" 
containerID="c31b780ed1f5bdea01d642cf815504d2b93b6af5be22ede0251dff03b6fb4b69" Jan 28 15:15:39 crc kubenswrapper[4893]: I0128 15:15:39.791683 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" event={"ID":"b2ddd967-f9a8-464a-95de-512c9c5874fd","Type":"ContainerStarted","Data":"e5fb5a1f3773928c39eda437a9e56f4ecca599067083a7fd3baff85989507ed7"} Jan 28 15:15:43 crc kubenswrapper[4893]: I0128 15:15:43.034257 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-2dw5k" Jan 28 15:15:48 crc kubenswrapper[4893]: I0128 15:15:48.891231 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-ft8jl" Jan 28 15:15:48 crc kubenswrapper[4893]: I0128 15:15:48.892280 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-ft8jl" Jan 28 15:15:49 crc kubenswrapper[4893]: I0128 15:15:49.139576 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ft8jl"] Jan 28 15:15:49 crc kubenswrapper[4893]: I0128 15:15:49.855585 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ft8jl" event={"ID":"caa645f7-f683-4bda-851a-91732a41d8fc","Type":"ContainerStarted","Data":"cb4648a837544c136d4398b2d5bac152d515ee7a2b92d70ef55dd445d672c8f5"} Jan 28 15:15:49 crc kubenswrapper[4893]: I0128 15:15:49.891945 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-shqgw" Jan 28 15:15:49 crc kubenswrapper[4893]: I0128 15:15:49.892509 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-shqgw" Jan 28 15:15:50 crc kubenswrapper[4893]: I0128 15:15:50.067734 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-shqgw"] Jan 28 15:15:50 crc kubenswrapper[4893]: I0128 15:15:50.862459 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ft8jl" event={"ID":"caa645f7-f683-4bda-851a-91732a41d8fc","Type":"ContainerStarted","Data":"86d24753ff088847a8ee66707d10484a4af5bd7511b11416f4723791c8edae1e"} Jan 28 15:15:50 crc kubenswrapper[4893]: I0128 15:15:50.863451 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-shqgw" event={"ID":"8a3c7538-e078-4e89-b34d-dd128942e19d","Type":"ContainerStarted","Data":"331b967cbe28d82b3e4c0545c47bdd1f1ad1fa1d4aa9c6962550fb9b80f1aa1c"} Jan 28 15:15:50 crc kubenswrapper[4893]: I0128 15:15:50.891422 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:50 crc kubenswrapper[4893]: I0128 15:15:50.891993 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:51 crc kubenswrapper[4893]: I0128 15:15:51.156555 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-595f694657-cvf56"] Jan 28 15:15:51 crc kubenswrapper[4893]: I0128 15:15:51.871595 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-595f694657-cvf56" event={"ID":"ef9f2441-625b-46c4-8597-2ff7fc781dd0","Type":"ContainerStarted","Data":"362af297fdb8b79d5c0de86ce5e16391d2481c47dfeae93f9b0c48b5b8de0eef"} Jan 28 15:15:51 crc kubenswrapper[4893]: I0128 15:15:51.872071 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-595f694657-cvf56" event={"ID":"ef9f2441-625b-46c4-8597-2ff7fc781dd0","Type":"ContainerStarted","Data":"0453d4b7a2aac3c180128d445f8637c0c124f3d0f1072087d90f65a0e2996e68"} Jan 28 15:15:51 crc kubenswrapper[4893]: I0128 15:15:51.891936 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-595f694657-cvf56" podStartSLOduration=18.891899066 podStartE2EDuration="18.891899066s" podCreationTimestamp="2026-01-28 15:15:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:15:51.887871686 +0000 UTC m=+869.661486734" watchObservedRunningTime="2026-01-28 15:15:51.891899066 +0000 UTC m=+869.665514094" Jan 28 15:15:52 crc kubenswrapper[4893]: I0128 15:15:52.879997 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-shqgw" event={"ID":"8a3c7538-e078-4e89-b34d-dd128942e19d","Type":"ContainerStarted","Data":"4cca593d38da1a22c4cd1fce22eb2f06ee9a11af95fe77341ece8dd481e3c856"} Jan 28 15:15:52 crc kubenswrapper[4893]: I0128 15:15:52.898722 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-shqgw" podStartSLOduration=18.832394984 podStartE2EDuration="20.898703139s" podCreationTimestamp="2026-01-28 15:15:32 +0000 UTC" firstStartedPulling="2026-01-28 15:15:50.082070422 +0000 UTC m=+867.855685450" lastFinishedPulling="2026-01-28 15:15:52.148378577 +0000 UTC m=+869.921993605" observedRunningTime="2026-01-28 15:15:52.897814685 +0000 UTC m=+870.671429943" watchObservedRunningTime="2026-01-28 15:15:52.898703139 +0000 UTC m=+870.672318177" Jan 28 15:15:53 crc kubenswrapper[4893]: I0128 15:15:53.392514 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:53 crc kubenswrapper[4893]: I0128 15:15:53.392769 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:53 crc kubenswrapper[4893]: I0128 15:15:53.399600 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:53 crc kubenswrapper[4893]: I0128 15:15:53.887068 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ft8jl" event={"ID":"caa645f7-f683-4bda-851a-91732a41d8fc","Type":"ContainerStarted","Data":"93795ddd79264640d34de4f45a9a75fa9bf5345f698b849a292e3dea86aaebb6"} Jan 28 15:15:53 crc kubenswrapper[4893]: I0128 15:15:53.891189 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rprnk" Jan 28 15:15:53 crc kubenswrapper[4893]: I0128 15:15:53.891521 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rprnk" Jan 28 15:15:53 crc kubenswrapper[4893]: I0128 15:15:53.892079 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-595f694657-cvf56" Jan 28 15:15:53 crc kubenswrapper[4893]: I0128 15:15:53.921308 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-ft8jl" podStartSLOduration=17.4572286 podStartE2EDuration="21.921280403s" podCreationTimestamp="2026-01-28 15:15:32 +0000 UTC" firstStartedPulling="2026-01-28 15:15:49.145704254 +0000 UTC m=+866.919319282" lastFinishedPulling="2026-01-28 15:15:53.609756057 +0000 UTC m=+871.383371085" observedRunningTime="2026-01-28 15:15:53.906933261 +0000 UTC m=+871.680548299" watchObservedRunningTime="2026-01-28 15:15:53.921280403 +0000 UTC m=+871.694895441" Jan 28 15:15:53 crc kubenswrapper[4893]: I0128 15:15:53.991321 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-vzxzx"] Jan 28 15:15:54 crc kubenswrapper[4893]: I0128 15:15:54.391281 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-rprnk"] Jan 28 15:15:54 crc kubenswrapper[4893]: I0128 15:15:54.898699 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rprnk" event={"ID":"e3a5ef47-65ea-4135-ad67-c83b0aa175f4","Type":"ContainerStarted","Data":"d6df89ef04e07c3d09250ae67c3a766981bddb0b20c292ff9f0b7f6fbaa44bd9"} Jan 28 15:15:55 crc kubenswrapper[4893]: I0128 15:15:55.899839 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rprnk" event={"ID":"e3a5ef47-65ea-4135-ad67-c83b0aa175f4","Type":"ContainerStarted","Data":"9b9f535c71645865252e0f357f37b6c05c16bf8ca0e35f48dc319e3861a88c01"} Jan 28 15:15:55 crc kubenswrapper[4893]: I0128 15:15:55.914426 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rprnk" podStartSLOduration=22.808023414 podStartE2EDuration="23.914401219s" podCreationTimestamp="2026-01-28 15:15:32 +0000 UTC" firstStartedPulling="2026-01-28 15:15:54.403520135 +0000 UTC m=+872.177135163" lastFinishedPulling="2026-01-28 15:15:55.50989794 +0000 UTC m=+873.283512968" observedRunningTime="2026-01-28 15:15:55.913644107 +0000 UTC m=+873.687259135" watchObservedRunningTime="2026-01-28 15:15:55.914401219 +0000 UTC m=+873.688016247" Jan 28 15:15:56 crc kubenswrapper[4893]: I0128 15:15:56.905382 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rprnk" Jan 28 15:16:01 crc kubenswrapper[4893]: I0128 15:16:01.847357 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4c84b" Jan 28 15:16:13 crc kubenswrapper[4893]: I0128 15:16:13.003467 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rprnk" Jan 28 15:16:19 crc kubenswrapper[4893]: I0128 15:16:19.051188 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-vzxzx" podUID="7d249efd-e40b-430f-98ec-9ad9c4e5cf70" 
containerName="console" containerID="cri-o://61e083f0dd5ac76c19377a90c083b3ee94542a7d8e66df52259c52833a38e95b" gracePeriod=15 Jan 28 15:16:19 crc kubenswrapper[4893]: I0128 15:16:19.539275 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-vzxzx_7d249efd-e40b-430f-98ec-9ad9c4e5cf70/console/0.log" Jan 28 15:16:19 crc kubenswrapper[4893]: I0128 15:16:19.540026 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-vzxzx" Jan 28 15:16:19 crc kubenswrapper[4893]: I0128 15:16:19.591616 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-console-serving-cert\") pod \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\" (UID: \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\") " Jan 28 15:16:19 crc kubenswrapper[4893]: I0128 15:16:19.591697 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rjts\" (UniqueName: \"kubernetes.io/projected/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-kube-api-access-6rjts\") pod \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\" (UID: \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\") " Jan 28 15:16:19 crc kubenswrapper[4893]: I0128 15:16:19.591725 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-service-ca\") pod \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\" (UID: \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\") " Jan 28 15:16:19 crc kubenswrapper[4893]: I0128 15:16:19.591743 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-trusted-ca-bundle\") pod \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\" (UID: \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\") " Jan 28 15:16:19 crc kubenswrapper[4893]: I0128 15:16:19.591804 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-console-oauth-config\") pod \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\" (UID: \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\") " Jan 28 15:16:19 crc kubenswrapper[4893]: I0128 15:16:19.591825 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-oauth-serving-cert\") pod \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\" (UID: \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\") " Jan 28 15:16:19 crc kubenswrapper[4893]: I0128 15:16:19.592878 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-service-ca" (OuterVolumeSpecName: "service-ca") pod "7d249efd-e40b-430f-98ec-9ad9c4e5cf70" (UID: "7d249efd-e40b-430f-98ec-9ad9c4e5cf70"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:16:19 crc kubenswrapper[4893]: I0128 15:16:19.592966 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "7d249efd-e40b-430f-98ec-9ad9c4e5cf70" (UID: "7d249efd-e40b-430f-98ec-9ad9c4e5cf70"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:16:19 crc kubenswrapper[4893]: I0128 15:16:19.593073 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-console-config\") pod \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\" (UID: \"7d249efd-e40b-430f-98ec-9ad9c4e5cf70\") " Jan 28 15:16:19 crc kubenswrapper[4893]: I0128 15:16:19.593053 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "7d249efd-e40b-430f-98ec-9ad9c4e5cf70" (UID: "7d249efd-e40b-430f-98ec-9ad9c4e5cf70"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:16:19 crc kubenswrapper[4893]: I0128 15:16:19.593534 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-console-config" (OuterVolumeSpecName: "console-config") pod "7d249efd-e40b-430f-98ec-9ad9c4e5cf70" (UID: "7d249efd-e40b-430f-98ec-9ad9c4e5cf70"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:16:19 crc kubenswrapper[4893]: I0128 15:16:19.593765 4893 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:16:19 crc kubenswrapper[4893]: I0128 15:16:19.593792 4893 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:16:19 crc kubenswrapper[4893]: I0128 15:16:19.593808 4893 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:16:19 crc kubenswrapper[4893]: I0128 15:16:19.593823 4893 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:16:19 crc kubenswrapper[4893]: I0128 15:16:19.598690 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "7d249efd-e40b-430f-98ec-9ad9c4e5cf70" (UID: "7d249efd-e40b-430f-98ec-9ad9c4e5cf70"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:16:19 crc kubenswrapper[4893]: I0128 15:16:19.598884 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-kube-api-access-6rjts" (OuterVolumeSpecName: "kube-api-access-6rjts") pod "7d249efd-e40b-430f-98ec-9ad9c4e5cf70" (UID: "7d249efd-e40b-430f-98ec-9ad9c4e5cf70"). InnerVolumeSpecName "kube-api-access-6rjts". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:16:19 crc kubenswrapper[4893]: I0128 15:16:19.602953 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "7d249efd-e40b-430f-98ec-9ad9c4e5cf70" (UID: "7d249efd-e40b-430f-98ec-9ad9c4e5cf70"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:16:19 crc kubenswrapper[4893]: I0128 15:16:19.695197 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rjts\" (UniqueName: \"kubernetes.io/projected/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-kube-api-access-6rjts\") on node \"crc\" DevicePath \"\"" Jan 28 15:16:19 crc kubenswrapper[4893]: I0128 15:16:19.695597 4893 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:16:19 crc kubenswrapper[4893]: I0128 15:16:19.695614 4893 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7d249efd-e40b-430f-98ec-9ad9c4e5cf70-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:16:20 crc kubenswrapper[4893]: I0128 15:16:20.032171 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-vzxzx_7d249efd-e40b-430f-98ec-9ad9c4e5cf70/console/0.log" Jan 28 15:16:20 crc kubenswrapper[4893]: I0128 15:16:20.032227 4893 generic.go:334] "Generic (PLEG): container finished" podID="7d249efd-e40b-430f-98ec-9ad9c4e5cf70" containerID="61e083f0dd5ac76c19377a90c083b3ee94542a7d8e66df52259c52833a38e95b" exitCode=2 Jan 28 15:16:20 crc kubenswrapper[4893]: I0128 15:16:20.032264 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vzxzx" event={"ID":"7d249efd-e40b-430f-98ec-9ad9c4e5cf70","Type":"ContainerDied","Data":"61e083f0dd5ac76c19377a90c083b3ee94542a7d8e66df52259c52833a38e95b"} Jan 28 15:16:20 crc kubenswrapper[4893]: I0128 15:16:20.032300 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vzxzx" event={"ID":"7d249efd-e40b-430f-98ec-9ad9c4e5cf70","Type":"ContainerDied","Data":"94feddb56e2310a0d9a4fef68d89c33484f975433a10e263ce11830ea8a9699b"} Jan 28 15:16:20 crc kubenswrapper[4893]: I0128 15:16:20.032322 4893 scope.go:117] "RemoveContainer" containerID="61e083f0dd5ac76c19377a90c083b3ee94542a7d8e66df52259c52833a38e95b" Jan 28 15:16:20 crc kubenswrapper[4893]: I0128 15:16:20.032404 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-vzxzx" Jan 28 15:16:20 crc kubenswrapper[4893]: I0128 15:16:20.057204 4893 scope.go:117] "RemoveContainer" containerID="61e083f0dd5ac76c19377a90c083b3ee94542a7d8e66df52259c52833a38e95b" Jan 28 15:16:20 crc kubenswrapper[4893]: E0128 15:16:20.058988 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61e083f0dd5ac76c19377a90c083b3ee94542a7d8e66df52259c52833a38e95b\": container with ID starting with 61e083f0dd5ac76c19377a90c083b3ee94542a7d8e66df52259c52833a38e95b not found: ID does not exist" containerID="61e083f0dd5ac76c19377a90c083b3ee94542a7d8e66df52259c52833a38e95b" Jan 28 15:16:20 crc kubenswrapper[4893]: I0128 15:16:20.059062 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61e083f0dd5ac76c19377a90c083b3ee94542a7d8e66df52259c52833a38e95b"} err="failed to get container status \"61e083f0dd5ac76c19377a90c083b3ee94542a7d8e66df52259c52833a38e95b\": rpc error: code = NotFound desc = could not find container \"61e083f0dd5ac76c19377a90c083b3ee94542a7d8e66df52259c52833a38e95b\": container with ID starting with 61e083f0dd5ac76c19377a90c083b3ee94542a7d8e66df52259c52833a38e95b not found: ID does not exist" Jan 28 15:16:20 crc kubenswrapper[4893]: I0128 15:16:20.070248 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-vzxzx"] Jan 28 15:16:20 crc kubenswrapper[4893]: I0128 15:16:20.073968 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-vzxzx"] Jan 28 15:16:20 crc kubenswrapper[4893]: I0128 15:16:20.900773 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d249efd-e40b-430f-98ec-9ad9c4e5cf70" path="/var/lib/kubelet/pods/7d249efd-e40b-430f-98ec-9ad9c4e5cf70/volumes" Jan 28 15:16:25 crc kubenswrapper[4893]: I0128 15:16:25.496682 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z"] Jan 28 15:16:25 crc kubenswrapper[4893]: E0128 15:16:25.497965 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d191c46f-7ccf-4dc8-a2c8-477724435ff0" containerName="extract-content" Jan 28 15:16:25 crc kubenswrapper[4893]: I0128 15:16:25.497983 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="d191c46f-7ccf-4dc8-a2c8-477724435ff0" containerName="extract-content" Jan 28 15:16:25 crc kubenswrapper[4893]: E0128 15:16:25.497997 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d249efd-e40b-430f-98ec-9ad9c4e5cf70" containerName="console" Jan 28 15:16:25 crc kubenswrapper[4893]: I0128 15:16:25.498003 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d249efd-e40b-430f-98ec-9ad9c4e5cf70" containerName="console" Jan 28 15:16:25 crc kubenswrapper[4893]: E0128 15:16:25.498020 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d191c46f-7ccf-4dc8-a2c8-477724435ff0" containerName="extract-utilities" Jan 28 15:16:25 crc kubenswrapper[4893]: I0128 15:16:25.498026 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="d191c46f-7ccf-4dc8-a2c8-477724435ff0" containerName="extract-utilities" Jan 28 15:16:25 crc kubenswrapper[4893]: E0128 15:16:25.498035 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d191c46f-7ccf-4dc8-a2c8-477724435ff0" containerName="registry-server" Jan 28 15:16:25 crc kubenswrapper[4893]: I0128 15:16:25.498110 4893 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d191c46f-7ccf-4dc8-a2c8-477724435ff0" containerName="registry-server" Jan 28 15:16:25 crc kubenswrapper[4893]: I0128 15:16:25.498224 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="d191c46f-7ccf-4dc8-a2c8-477724435ff0" containerName="registry-server" Jan 28 15:16:25 crc kubenswrapper[4893]: I0128 15:16:25.498235 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d249efd-e40b-430f-98ec-9ad9c4e5cf70" containerName="console" Jan 28 15:16:25 crc kubenswrapper[4893]: I0128 15:16:25.499079 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z" Jan 28 15:16:25 crc kubenswrapper[4893]: I0128 15:16:25.501711 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 15:16:25 crc kubenswrapper[4893]: I0128 15:16:25.514166 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z"] Jan 28 15:16:25 crc kubenswrapper[4893]: I0128 15:16:25.573415 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z\" (UID: \"8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z" Jan 28 15:16:25 crc kubenswrapper[4893]: I0128 15:16:25.573491 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z\" (UID: \"8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z" Jan 28 15:16:25 crc kubenswrapper[4893]: I0128 15:16:25.573529 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69gcx\" (UniqueName: \"kubernetes.io/projected/8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a-kube-api-access-69gcx\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z\" (UID: \"8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z" Jan 28 15:16:25 crc kubenswrapper[4893]: I0128 15:16:25.675398 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z\" (UID: \"8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z" Jan 28 15:16:25 crc kubenswrapper[4893]: I0128 15:16:25.675452 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69gcx\" (UniqueName: \"kubernetes.io/projected/8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a-kube-api-access-69gcx\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z\" (UID: \"8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z" Jan 28 15:16:25 crc 
kubenswrapper[4893]: I0128 15:16:25.675764 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z\" (UID: \"8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z" Jan 28 15:16:25 crc kubenswrapper[4893]: I0128 15:16:25.676205 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z\" (UID: \"8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z" Jan 28 15:16:25 crc kubenswrapper[4893]: I0128 15:16:25.676207 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z\" (UID: \"8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z" Jan 28 15:16:25 crc kubenswrapper[4893]: I0128 15:16:25.695276 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69gcx\" (UniqueName: \"kubernetes.io/projected/8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a-kube-api-access-69gcx\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z\" (UID: \"8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z" Jan 28 15:16:25 crc kubenswrapper[4893]: I0128 15:16:25.870143 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z" Jan 28 15:16:26 crc kubenswrapper[4893]: I0128 15:16:26.085657 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z"] Jan 28 15:16:26 crc kubenswrapper[4893]: W0128 15:16:26.087576 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f1a58bf_a2e1_4d5e_8e3e_ba58edec1d9a.slice/crio-91593ae44961c03c41f390193fedb0e08b37f53148ed3f60ebc89e1b031db695 WatchSource:0}: Error finding container 91593ae44961c03c41f390193fedb0e08b37f53148ed3f60ebc89e1b031db695: Status 404 returned error can't find the container with id 91593ae44961c03c41f390193fedb0e08b37f53148ed3f60ebc89e1b031db695 Jan 28 15:16:27 crc kubenswrapper[4893]: I0128 15:16:27.073010 4893 generic.go:334] "Generic (PLEG): container finished" podID="8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a" containerID="7ac11fd11e75483c26f6cd3358cc041cdb394ac4a11fcb343d78640c830dc6f7" exitCode=0 Jan 28 15:16:27 crc kubenswrapper[4893]: I0128 15:16:27.073069 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z" event={"ID":"8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a","Type":"ContainerDied","Data":"7ac11fd11e75483c26f6cd3358cc041cdb394ac4a11fcb343d78640c830dc6f7"} Jan 28 15:16:27 crc kubenswrapper[4893]: I0128 15:16:27.073129 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z" event={"ID":"8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a","Type":"ContainerStarted","Data":"91593ae44961c03c41f390193fedb0e08b37f53148ed3f60ebc89e1b031db695"} Jan 28 15:16:31 crc kubenswrapper[4893]: I0128 15:16:31.099634 4893 generic.go:334] "Generic (PLEG): container finished" podID="8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a" containerID="936beed191fa602c30672daebc83a54e3d94c159a94b14c7739aa832b3ef6bad" exitCode=0 Jan 28 15:16:31 crc kubenswrapper[4893]: I0128 15:16:31.100181 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z" event={"ID":"8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a","Type":"ContainerDied","Data":"936beed191fa602c30672daebc83a54e3d94c159a94b14c7739aa832b3ef6bad"} Jan 28 15:16:32 crc kubenswrapper[4893]: I0128 15:16:32.110407 4893 generic.go:334] "Generic (PLEG): container finished" podID="8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a" containerID="44121e1be25fd0f7392c29a294c2b039206b42f1b6320d443b7a22fce57fb40d" exitCode=0 Jan 28 15:16:32 crc kubenswrapper[4893]: I0128 15:16:32.110459 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z" event={"ID":"8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a","Type":"ContainerDied","Data":"44121e1be25fd0f7392c29a294c2b039206b42f1b6320d443b7a22fce57fb40d"} Jan 28 15:16:33 crc kubenswrapper[4893]: I0128 15:16:33.360778 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z" Jan 28 15:16:33 crc kubenswrapper[4893]: I0128 15:16:33.479374 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a-util\") pod \"8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a\" (UID: \"8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a\") " Jan 28 15:16:33 crc kubenswrapper[4893]: I0128 15:16:33.479512 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69gcx\" (UniqueName: \"kubernetes.io/projected/8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a-kube-api-access-69gcx\") pod \"8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a\" (UID: \"8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a\") " Jan 28 15:16:33 crc kubenswrapper[4893]: I0128 15:16:33.480626 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a-bundle\") pod \"8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a\" (UID: \"8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a\") " Jan 28 15:16:33 crc kubenswrapper[4893]: I0128 15:16:33.480868 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a-bundle" (OuterVolumeSpecName: "bundle") pod "8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a" (UID: "8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:16:33 crc kubenswrapper[4893]: I0128 15:16:33.481060 4893 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:16:33 crc kubenswrapper[4893]: I0128 15:16:33.485164 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a-kube-api-access-69gcx" (OuterVolumeSpecName: "kube-api-access-69gcx") pod "8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a" (UID: "8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a"). InnerVolumeSpecName "kube-api-access-69gcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:16:33 crc kubenswrapper[4893]: I0128 15:16:33.490002 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a-util" (OuterVolumeSpecName: "util") pod "8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a" (UID: "8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:16:33 crc kubenswrapper[4893]: I0128 15:16:33.581983 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69gcx\" (UniqueName: \"kubernetes.io/projected/8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a-kube-api-access-69gcx\") on node \"crc\" DevicePath \"\"" Jan 28 15:16:33 crc kubenswrapper[4893]: I0128 15:16:33.582033 4893 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a-util\") on node \"crc\" DevicePath \"\"" Jan 28 15:16:34 crc kubenswrapper[4893]: I0128 15:16:34.122927 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z" Jan 28 15:16:34 crc kubenswrapper[4893]: I0128 15:16:34.122918 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z" event={"ID":"8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a","Type":"ContainerDied","Data":"91593ae44961c03c41f390193fedb0e08b37f53148ed3f60ebc89e1b031db695"} Jan 28 15:16:34 crc kubenswrapper[4893]: I0128 15:16:34.123050 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91593ae44961c03c41f390193fedb0e08b37f53148ed3f60ebc89e1b031db695" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.233045 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5fb7f789ff-r8s24"] Jan 28 15:16:44 crc kubenswrapper[4893]: E0128 15:16:44.235255 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a" containerName="extract" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.235353 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a" containerName="extract" Jan 28 15:16:44 crc kubenswrapper[4893]: E0128 15:16:44.235416 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a" containerName="pull" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.235466 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a" containerName="pull" Jan 28 15:16:44 crc kubenswrapper[4893]: E0128 15:16:44.235554 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a" containerName="util" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.235607 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a" containerName="util" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.235786 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a" containerName="extract" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.236294 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5fb7f789ff-r8s24" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.243452 4893 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.243752 4893 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.243989 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.244547 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.244723 4893 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-vcbn5" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.268825 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5fb7f789ff-r8s24"] Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.407212 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6a47ff79-34bc-48fe-aade-b4c90918419d-webhook-cert\") pod \"metallb-operator-controller-manager-5fb7f789ff-r8s24\" (UID: \"6a47ff79-34bc-48fe-aade-b4c90918419d\") " pod="metallb-system/metallb-operator-controller-manager-5fb7f789ff-r8s24" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.407262 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a47ff79-34bc-48fe-aade-b4c90918419d-apiservice-cert\") pod \"metallb-operator-controller-manager-5fb7f789ff-r8s24\" (UID: \"6a47ff79-34bc-48fe-aade-b4c90918419d\") " pod="metallb-system/metallb-operator-controller-manager-5fb7f789ff-r8s24" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.407434 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvv5x\" (UniqueName: \"kubernetes.io/projected/6a47ff79-34bc-48fe-aade-b4c90918419d-kube-api-access-nvv5x\") pod \"metallb-operator-controller-manager-5fb7f789ff-r8s24\" (UID: \"6a47ff79-34bc-48fe-aade-b4c90918419d\") " pod="metallb-system/metallb-operator-controller-manager-5fb7f789ff-r8s24" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.470008 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6d5bf7b7c8-trnh8"] Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.470917 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6d5bf7b7c8-trnh8" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.473190 4893 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.473832 4893 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.474513 4893 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-qmpb9" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.484632 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6d5bf7b7c8-trnh8"] Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.510752 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6a47ff79-34bc-48fe-aade-b4c90918419d-webhook-cert\") pod \"metallb-operator-controller-manager-5fb7f789ff-r8s24\" (UID: \"6a47ff79-34bc-48fe-aade-b4c90918419d\") " pod="metallb-system/metallb-operator-controller-manager-5fb7f789ff-r8s24" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.510822 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a47ff79-34bc-48fe-aade-b4c90918419d-apiservice-cert\") pod \"metallb-operator-controller-manager-5fb7f789ff-r8s24\" (UID: \"6a47ff79-34bc-48fe-aade-b4c90918419d\") " pod="metallb-system/metallb-operator-controller-manager-5fb7f789ff-r8s24" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.510888 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvv5x\" (UniqueName: \"kubernetes.io/projected/6a47ff79-34bc-48fe-aade-b4c90918419d-kube-api-access-nvv5x\") pod \"metallb-operator-controller-manager-5fb7f789ff-r8s24\" (UID: \"6a47ff79-34bc-48fe-aade-b4c90918419d\") " pod="metallb-system/metallb-operator-controller-manager-5fb7f789ff-r8s24" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.520657 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a47ff79-34bc-48fe-aade-b4c90918419d-apiservice-cert\") pod \"metallb-operator-controller-manager-5fb7f789ff-r8s24\" (UID: \"6a47ff79-34bc-48fe-aade-b4c90918419d\") " pod="metallb-system/metallb-operator-controller-manager-5fb7f789ff-r8s24" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.521526 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6a47ff79-34bc-48fe-aade-b4c90918419d-webhook-cert\") pod \"metallb-operator-controller-manager-5fb7f789ff-r8s24\" (UID: \"6a47ff79-34bc-48fe-aade-b4c90918419d\") " pod="metallb-system/metallb-operator-controller-manager-5fb7f789ff-r8s24" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.533991 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvv5x\" (UniqueName: \"kubernetes.io/projected/6a47ff79-34bc-48fe-aade-b4c90918419d-kube-api-access-nvv5x\") pod \"metallb-operator-controller-manager-5fb7f789ff-r8s24\" (UID: \"6a47ff79-34bc-48fe-aade-b4c90918419d\") " pod="metallb-system/metallb-operator-controller-manager-5fb7f789ff-r8s24" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.552174 4893 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5fb7f789ff-r8s24" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.614214 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/228cc148-34cc-48ee-9a91-61a50b8d2759-webhook-cert\") pod \"metallb-operator-webhook-server-6d5bf7b7c8-trnh8\" (UID: \"228cc148-34cc-48ee-9a91-61a50b8d2759\") " pod="metallb-system/metallb-operator-webhook-server-6d5bf7b7c8-trnh8" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.614264 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/228cc148-34cc-48ee-9a91-61a50b8d2759-apiservice-cert\") pod \"metallb-operator-webhook-server-6d5bf7b7c8-trnh8\" (UID: \"228cc148-34cc-48ee-9a91-61a50b8d2759\") " pod="metallb-system/metallb-operator-webhook-server-6d5bf7b7c8-trnh8" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.614283 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj92v\" (UniqueName: \"kubernetes.io/projected/228cc148-34cc-48ee-9a91-61a50b8d2759-kube-api-access-tj92v\") pod \"metallb-operator-webhook-server-6d5bf7b7c8-trnh8\" (UID: \"228cc148-34cc-48ee-9a91-61a50b8d2759\") " pod="metallb-system/metallb-operator-webhook-server-6d5bf7b7c8-trnh8" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.717030 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/228cc148-34cc-48ee-9a91-61a50b8d2759-webhook-cert\") pod \"metallb-operator-webhook-server-6d5bf7b7c8-trnh8\" (UID: \"228cc148-34cc-48ee-9a91-61a50b8d2759\") " pod="metallb-system/metallb-operator-webhook-server-6d5bf7b7c8-trnh8" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.717596 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/228cc148-34cc-48ee-9a91-61a50b8d2759-apiservice-cert\") pod \"metallb-operator-webhook-server-6d5bf7b7c8-trnh8\" (UID: \"228cc148-34cc-48ee-9a91-61a50b8d2759\") " pod="metallb-system/metallb-operator-webhook-server-6d5bf7b7c8-trnh8" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.717630 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj92v\" (UniqueName: \"kubernetes.io/projected/228cc148-34cc-48ee-9a91-61a50b8d2759-kube-api-access-tj92v\") pod \"metallb-operator-webhook-server-6d5bf7b7c8-trnh8\" (UID: \"228cc148-34cc-48ee-9a91-61a50b8d2759\") " pod="metallb-system/metallb-operator-webhook-server-6d5bf7b7c8-trnh8" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.723302 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/228cc148-34cc-48ee-9a91-61a50b8d2759-apiservice-cert\") pod \"metallb-operator-webhook-server-6d5bf7b7c8-trnh8\" (UID: \"228cc148-34cc-48ee-9a91-61a50b8d2759\") " pod="metallb-system/metallb-operator-webhook-server-6d5bf7b7c8-trnh8" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.723423 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/228cc148-34cc-48ee-9a91-61a50b8d2759-webhook-cert\") pod \"metallb-operator-webhook-server-6d5bf7b7c8-trnh8\" (UID: 
\"228cc148-34cc-48ee-9a91-61a50b8d2759\") " pod="metallb-system/metallb-operator-webhook-server-6d5bf7b7c8-trnh8" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.738521 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj92v\" (UniqueName: \"kubernetes.io/projected/228cc148-34cc-48ee-9a91-61a50b8d2759-kube-api-access-tj92v\") pod \"metallb-operator-webhook-server-6d5bf7b7c8-trnh8\" (UID: \"228cc148-34cc-48ee-9a91-61a50b8d2759\") " pod="metallb-system/metallb-operator-webhook-server-6d5bf7b7c8-trnh8" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.784538 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6d5bf7b7c8-trnh8" Jan 28 15:16:44 crc kubenswrapper[4893]: I0128 15:16:44.834699 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5fb7f789ff-r8s24"] Jan 28 15:16:45 crc kubenswrapper[4893]: I0128 15:16:45.037177 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6d5bf7b7c8-trnh8"] Jan 28 15:16:45 crc kubenswrapper[4893]: W0128 15:16:45.044442 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod228cc148_34cc_48ee_9a91_61a50b8d2759.slice/crio-edcb80258011ff55794ac3758f7fb6b78d2be81173d65e6569e024e90ebe561f WatchSource:0}: Error finding container edcb80258011ff55794ac3758f7fb6b78d2be81173d65e6569e024e90ebe561f: Status 404 returned error can't find the container with id edcb80258011ff55794ac3758f7fb6b78d2be81173d65e6569e024e90ebe561f Jan 28 15:16:45 crc kubenswrapper[4893]: I0128 15:16:45.180498 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6d5bf7b7c8-trnh8" event={"ID":"228cc148-34cc-48ee-9a91-61a50b8d2759","Type":"ContainerStarted","Data":"edcb80258011ff55794ac3758f7fb6b78d2be81173d65e6569e024e90ebe561f"} Jan 28 15:16:45 crc kubenswrapper[4893]: I0128 15:16:45.181789 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5fb7f789ff-r8s24" event={"ID":"6a47ff79-34bc-48fe-aade-b4c90918419d","Type":"ContainerStarted","Data":"81db52b289dac483c675771f17dc16a4e08e50a868226ecd5aa0539d60864fb6"} Jan 28 15:16:53 crc kubenswrapper[4893]: I0128 15:16:53.225850 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5fb7f789ff-r8s24" event={"ID":"6a47ff79-34bc-48fe-aade-b4c90918419d","Type":"ContainerStarted","Data":"f9e1365ad3fe421c8ef1a8ae9dbf0dbaccc9383fdd9ca00e0d58b66ca92ec412"} Jan 28 15:16:53 crc kubenswrapper[4893]: I0128 15:16:53.226575 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5fb7f789ff-r8s24" Jan 28 15:16:53 crc kubenswrapper[4893]: I0128 15:16:53.269101 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5fb7f789ff-r8s24" podStartSLOduration=1.242454137 podStartE2EDuration="9.269081022s" podCreationTimestamp="2026-01-28 15:16:44 +0000 UTC" firstStartedPulling="2026-01-28 15:16:44.849361363 +0000 UTC m=+922.622976391" lastFinishedPulling="2026-01-28 15:16:52.875988248 +0000 UTC m=+930.649603276" observedRunningTime="2026-01-28 15:16:53.265667648 +0000 UTC m=+931.039282676" watchObservedRunningTime="2026-01-28 15:16:53.269081022 +0000 UTC 
m=+931.042696050" Jan 28 15:16:55 crc kubenswrapper[4893]: I0128 15:16:55.238737 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6d5bf7b7c8-trnh8" event={"ID":"228cc148-34cc-48ee-9a91-61a50b8d2759","Type":"ContainerStarted","Data":"3f35971d203392e67f7a83d8cd3b68dfcde777bf3671377fb470cb3c06dabc90"} Jan 28 15:16:55 crc kubenswrapper[4893]: I0128 15:16:55.257685 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6d5bf7b7c8-trnh8" podStartSLOduration=2.380672937 podStartE2EDuration="11.257662832s" podCreationTimestamp="2026-01-28 15:16:44 +0000 UTC" firstStartedPulling="2026-01-28 15:16:45.047415286 +0000 UTC m=+922.821030324" lastFinishedPulling="2026-01-28 15:16:53.924405191 +0000 UTC m=+931.698020219" observedRunningTime="2026-01-28 15:16:55.255357448 +0000 UTC m=+933.028972476" watchObservedRunningTime="2026-01-28 15:16:55.257662832 +0000 UTC m=+933.031277860" Jan 28 15:16:56 crc kubenswrapper[4893]: I0128 15:16:56.245884 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6d5bf7b7c8-trnh8" Jan 28 15:17:04 crc kubenswrapper[4893]: I0128 15:17:04.792038 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6d5bf7b7c8-trnh8" Jan 28 15:17:07 crc kubenswrapper[4893]: I0128 15:17:07.294857 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jzsvj"] Jan 28 15:17:07 crc kubenswrapper[4893]: I0128 15:17:07.296466 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jzsvj" Jan 28 15:17:07 crc kubenswrapper[4893]: I0128 15:17:07.314438 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzsvj"] Jan 28 15:17:07 crc kubenswrapper[4893]: I0128 15:17:07.337251 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/326622e0-ef8b-4c69-b82c-ed7eb8560c47-catalog-content\") pod \"redhat-marketplace-jzsvj\" (UID: \"326622e0-ef8b-4c69-b82c-ed7eb8560c47\") " pod="openshift-marketplace/redhat-marketplace-jzsvj" Jan 28 15:17:07 crc kubenswrapper[4893]: I0128 15:17:07.337324 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc56j\" (UniqueName: \"kubernetes.io/projected/326622e0-ef8b-4c69-b82c-ed7eb8560c47-kube-api-access-tc56j\") pod \"redhat-marketplace-jzsvj\" (UID: \"326622e0-ef8b-4c69-b82c-ed7eb8560c47\") " pod="openshift-marketplace/redhat-marketplace-jzsvj" Jan 28 15:17:07 crc kubenswrapper[4893]: I0128 15:17:07.337361 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/326622e0-ef8b-4c69-b82c-ed7eb8560c47-utilities\") pod \"redhat-marketplace-jzsvj\" (UID: \"326622e0-ef8b-4c69-b82c-ed7eb8560c47\") " pod="openshift-marketplace/redhat-marketplace-jzsvj" Jan 28 15:17:07 crc kubenswrapper[4893]: I0128 15:17:07.438078 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/326622e0-ef8b-4c69-b82c-ed7eb8560c47-catalog-content\") pod \"redhat-marketplace-jzsvj\" (UID: \"326622e0-ef8b-4c69-b82c-ed7eb8560c47\") " 
pod="openshift-marketplace/redhat-marketplace-jzsvj" Jan 28 15:17:07 crc kubenswrapper[4893]: I0128 15:17:07.438166 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc56j\" (UniqueName: \"kubernetes.io/projected/326622e0-ef8b-4c69-b82c-ed7eb8560c47-kube-api-access-tc56j\") pod \"redhat-marketplace-jzsvj\" (UID: \"326622e0-ef8b-4c69-b82c-ed7eb8560c47\") " pod="openshift-marketplace/redhat-marketplace-jzsvj" Jan 28 15:17:07 crc kubenswrapper[4893]: I0128 15:17:07.438207 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/326622e0-ef8b-4c69-b82c-ed7eb8560c47-utilities\") pod \"redhat-marketplace-jzsvj\" (UID: \"326622e0-ef8b-4c69-b82c-ed7eb8560c47\") " pod="openshift-marketplace/redhat-marketplace-jzsvj" Jan 28 15:17:07 crc kubenswrapper[4893]: I0128 15:17:07.438658 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/326622e0-ef8b-4c69-b82c-ed7eb8560c47-utilities\") pod \"redhat-marketplace-jzsvj\" (UID: \"326622e0-ef8b-4c69-b82c-ed7eb8560c47\") " pod="openshift-marketplace/redhat-marketplace-jzsvj" Jan 28 15:17:07 crc kubenswrapper[4893]: I0128 15:17:07.438665 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/326622e0-ef8b-4c69-b82c-ed7eb8560c47-catalog-content\") pod \"redhat-marketplace-jzsvj\" (UID: \"326622e0-ef8b-4c69-b82c-ed7eb8560c47\") " pod="openshift-marketplace/redhat-marketplace-jzsvj" Jan 28 15:17:07 crc kubenswrapper[4893]: I0128 15:17:07.458772 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc56j\" (UniqueName: \"kubernetes.io/projected/326622e0-ef8b-4c69-b82c-ed7eb8560c47-kube-api-access-tc56j\") pod \"redhat-marketplace-jzsvj\" (UID: \"326622e0-ef8b-4c69-b82c-ed7eb8560c47\") " pod="openshift-marketplace/redhat-marketplace-jzsvj" Jan 28 15:17:07 crc kubenswrapper[4893]: I0128 15:17:07.611970 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jzsvj" Jan 28 15:17:07 crc kubenswrapper[4893]: I0128 15:17:07.821577 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzsvj"] Jan 28 15:17:07 crc kubenswrapper[4893]: W0128 15:17:07.830149 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod326622e0_ef8b_4c69_b82c_ed7eb8560c47.slice/crio-3ec0b8b1a4e3b47f1ba7df7a4595cb264a04da2a3552e41aca7725a40d6f439f WatchSource:0}: Error finding container 3ec0b8b1a4e3b47f1ba7df7a4595cb264a04da2a3552e41aca7725a40d6f439f: Status 404 returned error can't find the container with id 3ec0b8b1a4e3b47f1ba7df7a4595cb264a04da2a3552e41aca7725a40d6f439f Jan 28 15:17:08 crc kubenswrapper[4893]: I0128 15:17:08.313346 4893 generic.go:334] "Generic (PLEG): container finished" podID="326622e0-ef8b-4c69-b82c-ed7eb8560c47" containerID="47661717ae2339b9903471bf4125e54fc4ff74d505a935915d4085952b8b097a" exitCode=0 Jan 28 15:17:08 crc kubenswrapper[4893]: I0128 15:17:08.313457 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzsvj" event={"ID":"326622e0-ef8b-4c69-b82c-ed7eb8560c47","Type":"ContainerDied","Data":"47661717ae2339b9903471bf4125e54fc4ff74d505a935915d4085952b8b097a"} Jan 28 15:17:08 crc kubenswrapper[4893]: I0128 15:17:08.313690 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzsvj" event={"ID":"326622e0-ef8b-4c69-b82c-ed7eb8560c47","Type":"ContainerStarted","Data":"3ec0b8b1a4e3b47f1ba7df7a4595cb264a04da2a3552e41aca7725a40d6f439f"} Jan 28 15:17:09 crc kubenswrapper[4893]: I0128 15:17:09.320899 4893 generic.go:334] "Generic (PLEG): container finished" podID="326622e0-ef8b-4c69-b82c-ed7eb8560c47" containerID="054681d1de8bcd52fedb26c6f29bdad4f7459f27c8c20a578aa619408e231793" exitCode=0 Jan 28 15:17:09 crc kubenswrapper[4893]: I0128 15:17:09.321239 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzsvj" event={"ID":"326622e0-ef8b-4c69-b82c-ed7eb8560c47","Type":"ContainerDied","Data":"054681d1de8bcd52fedb26c6f29bdad4f7459f27c8c20a578aa619408e231793"} Jan 28 15:17:10 crc kubenswrapper[4893]: I0128 15:17:10.329670 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzsvj" event={"ID":"326622e0-ef8b-4c69-b82c-ed7eb8560c47","Type":"ContainerStarted","Data":"89ba2e7de41e32319b11e8d4ae034800e0e04a926a382146cb4de99a6e332de1"} Jan 28 15:17:10 crc kubenswrapper[4893]: I0128 15:17:10.352924 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jzsvj" podStartSLOduration=1.7914714680000001 podStartE2EDuration="3.352896244s" podCreationTimestamp="2026-01-28 15:17:07 +0000 UTC" firstStartedPulling="2026-01-28 15:17:08.314866869 +0000 UTC m=+946.088481897" lastFinishedPulling="2026-01-28 15:17:09.876291635 +0000 UTC m=+947.649906673" observedRunningTime="2026-01-28 15:17:10.347893796 +0000 UTC m=+948.121508904" watchObservedRunningTime="2026-01-28 15:17:10.352896244 +0000 UTC m=+948.126511302" Jan 28 15:17:17 crc kubenswrapper[4893]: I0128 15:17:17.612334 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jzsvj" Jan 28 15:17:17 crc kubenswrapper[4893]: I0128 15:17:17.612977 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-jzsvj" Jan 28 15:17:17 crc kubenswrapper[4893]: I0128 15:17:17.657362 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jzsvj" Jan 28 15:17:18 crc kubenswrapper[4893]: I0128 15:17:18.411602 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jzsvj" Jan 28 15:17:19 crc kubenswrapper[4893]: I0128 15:17:19.886079 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzsvj"] Jan 28 15:17:20 crc kubenswrapper[4893]: I0128 15:17:20.383073 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jzsvj" podUID="326622e0-ef8b-4c69-b82c-ed7eb8560c47" containerName="registry-server" containerID="cri-o://89ba2e7de41e32319b11e8d4ae034800e0e04a926a382146cb4de99a6e332de1" gracePeriod=2 Jan 28 15:17:21 crc kubenswrapper[4893]: I0128 15:17:21.324502 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jzsvj" Jan 28 15:17:21 crc kubenswrapper[4893]: I0128 15:17:21.390317 4893 generic.go:334] "Generic (PLEG): container finished" podID="326622e0-ef8b-4c69-b82c-ed7eb8560c47" containerID="89ba2e7de41e32319b11e8d4ae034800e0e04a926a382146cb4de99a6e332de1" exitCode=0 Jan 28 15:17:21 crc kubenswrapper[4893]: I0128 15:17:21.390406 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jzsvj" Jan 28 15:17:21 crc kubenswrapper[4893]: I0128 15:17:21.390364 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzsvj" event={"ID":"326622e0-ef8b-4c69-b82c-ed7eb8560c47","Type":"ContainerDied","Data":"89ba2e7de41e32319b11e8d4ae034800e0e04a926a382146cb4de99a6e332de1"} Jan 28 15:17:21 crc kubenswrapper[4893]: I0128 15:17:21.390517 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzsvj" event={"ID":"326622e0-ef8b-4c69-b82c-ed7eb8560c47","Type":"ContainerDied","Data":"3ec0b8b1a4e3b47f1ba7df7a4595cb264a04da2a3552e41aca7725a40d6f439f"} Jan 28 15:17:21 crc kubenswrapper[4893]: I0128 15:17:21.390536 4893 scope.go:117] "RemoveContainer" containerID="89ba2e7de41e32319b11e8d4ae034800e0e04a926a382146cb4de99a6e332de1" Jan 28 15:17:21 crc kubenswrapper[4893]: I0128 15:17:21.405285 4893 scope.go:117] "RemoveContainer" containerID="054681d1de8bcd52fedb26c6f29bdad4f7459f27c8c20a578aa619408e231793" Jan 28 15:17:21 crc kubenswrapper[4893]: I0128 15:17:21.417926 4893 scope.go:117] "RemoveContainer" containerID="47661717ae2339b9903471bf4125e54fc4ff74d505a935915d4085952b8b097a" Jan 28 15:17:21 crc kubenswrapper[4893]: I0128 15:17:21.422049 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc56j\" (UniqueName: \"kubernetes.io/projected/326622e0-ef8b-4c69-b82c-ed7eb8560c47-kube-api-access-tc56j\") pod \"326622e0-ef8b-4c69-b82c-ed7eb8560c47\" (UID: \"326622e0-ef8b-4c69-b82c-ed7eb8560c47\") " Jan 28 15:17:21 crc kubenswrapper[4893]: I0128 15:17:21.422176 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/326622e0-ef8b-4c69-b82c-ed7eb8560c47-catalog-content\") pod \"326622e0-ef8b-4c69-b82c-ed7eb8560c47\" (UID: \"326622e0-ef8b-4c69-b82c-ed7eb8560c47\") " Jan 28 15:17:21 crc 
kubenswrapper[4893]: I0128 15:17:21.424006 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/326622e0-ef8b-4c69-b82c-ed7eb8560c47-utilities\") pod \"326622e0-ef8b-4c69-b82c-ed7eb8560c47\" (UID: \"326622e0-ef8b-4c69-b82c-ed7eb8560c47\") " Jan 28 15:17:21 crc kubenswrapper[4893]: I0128 15:17:21.425747 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/326622e0-ef8b-4c69-b82c-ed7eb8560c47-utilities" (OuterVolumeSpecName: "utilities") pod "326622e0-ef8b-4c69-b82c-ed7eb8560c47" (UID: "326622e0-ef8b-4c69-b82c-ed7eb8560c47"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:17:21 crc kubenswrapper[4893]: I0128 15:17:21.428465 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/326622e0-ef8b-4c69-b82c-ed7eb8560c47-kube-api-access-tc56j" (OuterVolumeSpecName: "kube-api-access-tc56j") pod "326622e0-ef8b-4c69-b82c-ed7eb8560c47" (UID: "326622e0-ef8b-4c69-b82c-ed7eb8560c47"). InnerVolumeSpecName "kube-api-access-tc56j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:21 crc kubenswrapper[4893]: I0128 15:17:21.443009 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/326622e0-ef8b-4c69-b82c-ed7eb8560c47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "326622e0-ef8b-4c69-b82c-ed7eb8560c47" (UID: "326622e0-ef8b-4c69-b82c-ed7eb8560c47"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:17:21 crc kubenswrapper[4893]: I0128 15:17:21.458665 4893 scope.go:117] "RemoveContainer" containerID="89ba2e7de41e32319b11e8d4ae034800e0e04a926a382146cb4de99a6e332de1" Jan 28 15:17:21 crc kubenswrapper[4893]: E0128 15:17:21.459345 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89ba2e7de41e32319b11e8d4ae034800e0e04a926a382146cb4de99a6e332de1\": container with ID starting with 89ba2e7de41e32319b11e8d4ae034800e0e04a926a382146cb4de99a6e332de1 not found: ID does not exist" containerID="89ba2e7de41e32319b11e8d4ae034800e0e04a926a382146cb4de99a6e332de1" Jan 28 15:17:21 crc kubenswrapper[4893]: I0128 15:17:21.459397 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89ba2e7de41e32319b11e8d4ae034800e0e04a926a382146cb4de99a6e332de1"} err="failed to get container status \"89ba2e7de41e32319b11e8d4ae034800e0e04a926a382146cb4de99a6e332de1\": rpc error: code = NotFound desc = could not find container \"89ba2e7de41e32319b11e8d4ae034800e0e04a926a382146cb4de99a6e332de1\": container with ID starting with 89ba2e7de41e32319b11e8d4ae034800e0e04a926a382146cb4de99a6e332de1 not found: ID does not exist" Jan 28 15:17:21 crc kubenswrapper[4893]: I0128 15:17:21.459420 4893 scope.go:117] "RemoveContainer" containerID="054681d1de8bcd52fedb26c6f29bdad4f7459f27c8c20a578aa619408e231793" Jan 28 15:17:21 crc kubenswrapper[4893]: E0128 15:17:21.459828 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"054681d1de8bcd52fedb26c6f29bdad4f7459f27c8c20a578aa619408e231793\": container with ID starting with 054681d1de8bcd52fedb26c6f29bdad4f7459f27c8c20a578aa619408e231793 not found: ID does not exist" containerID="054681d1de8bcd52fedb26c6f29bdad4f7459f27c8c20a578aa619408e231793" Jan 28 
15:17:21 crc kubenswrapper[4893]: I0128 15:17:21.459868 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"054681d1de8bcd52fedb26c6f29bdad4f7459f27c8c20a578aa619408e231793"} err="failed to get container status \"054681d1de8bcd52fedb26c6f29bdad4f7459f27c8c20a578aa619408e231793\": rpc error: code = NotFound desc = could not find container \"054681d1de8bcd52fedb26c6f29bdad4f7459f27c8c20a578aa619408e231793\": container with ID starting with 054681d1de8bcd52fedb26c6f29bdad4f7459f27c8c20a578aa619408e231793 not found: ID does not exist" Jan 28 15:17:21 crc kubenswrapper[4893]: I0128 15:17:21.459884 4893 scope.go:117] "RemoveContainer" containerID="47661717ae2339b9903471bf4125e54fc4ff74d505a935915d4085952b8b097a" Jan 28 15:17:21 crc kubenswrapper[4893]: E0128 15:17:21.460319 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47661717ae2339b9903471bf4125e54fc4ff74d505a935915d4085952b8b097a\": container with ID starting with 47661717ae2339b9903471bf4125e54fc4ff74d505a935915d4085952b8b097a not found: ID does not exist" containerID="47661717ae2339b9903471bf4125e54fc4ff74d505a935915d4085952b8b097a" Jan 28 15:17:21 crc kubenswrapper[4893]: I0128 15:17:21.460350 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47661717ae2339b9903471bf4125e54fc4ff74d505a935915d4085952b8b097a"} err="failed to get container status \"47661717ae2339b9903471bf4125e54fc4ff74d505a935915d4085952b8b097a\": rpc error: code = NotFound desc = could not find container \"47661717ae2339b9903471bf4125e54fc4ff74d505a935915d4085952b8b097a\": container with ID starting with 47661717ae2339b9903471bf4125e54fc4ff74d505a935915d4085952b8b097a not found: ID does not exist" Jan 28 15:17:21 crc kubenswrapper[4893]: I0128 15:17:21.525938 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tc56j\" (UniqueName: \"kubernetes.io/projected/326622e0-ef8b-4c69-b82c-ed7eb8560c47-kube-api-access-tc56j\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:21 crc kubenswrapper[4893]: I0128 15:17:21.525970 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/326622e0-ef8b-4c69-b82c-ed7eb8560c47-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:21 crc kubenswrapper[4893]: I0128 15:17:21.525980 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/326622e0-ef8b-4c69-b82c-ed7eb8560c47-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:21 crc kubenswrapper[4893]: I0128 15:17:21.721463 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzsvj"] Jan 28 15:17:21 crc kubenswrapper[4893]: I0128 15:17:21.724896 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzsvj"] Jan 28 15:17:22 crc kubenswrapper[4893]: I0128 15:17:22.897695 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="326622e0-ef8b-4c69-b82c-ed7eb8560c47" path="/var/lib/kubelet/pods/326622e0-ef8b-4c69-b82c-ed7eb8560c47/volumes" Jan 28 15:17:24 crc kubenswrapper[4893]: I0128 15:17:24.555778 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5fb7f789ff-r8s24" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.361482 4893 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["metallb-system/frr-k8s-scpzv"] Jan 28 15:17:25 crc kubenswrapper[4893]: E0128 15:17:25.361729 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="326622e0-ef8b-4c69-b82c-ed7eb8560c47" containerName="extract-utilities" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.361744 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="326622e0-ef8b-4c69-b82c-ed7eb8560c47" containerName="extract-utilities" Jan 28 15:17:25 crc kubenswrapper[4893]: E0128 15:17:25.361760 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="326622e0-ef8b-4c69-b82c-ed7eb8560c47" containerName="extract-content" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.361768 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="326622e0-ef8b-4c69-b82c-ed7eb8560c47" containerName="extract-content" Jan 28 15:17:25 crc kubenswrapper[4893]: E0128 15:17:25.361783 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="326622e0-ef8b-4c69-b82c-ed7eb8560c47" containerName="registry-server" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.361789 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="326622e0-ef8b-4c69-b82c-ed7eb8560c47" containerName="registry-server" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.361890 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="326622e0-ef8b-4c69-b82c-ed7eb8560c47" containerName="registry-server" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.363805 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-scpzv" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.365609 4893 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.366732 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.366997 4893 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-pv562" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.371223 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/14936c88-97a1-45bd-96f7-947ea39807a0-frr-conf\") pod \"frr-k8s-scpzv\" (UID: \"14936c88-97a1-45bd-96f7-947ea39807a0\") " pod="metallb-system/frr-k8s-scpzv" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.371326 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/14936c88-97a1-45bd-96f7-947ea39807a0-frr-sockets\") pod \"frr-k8s-scpzv\" (UID: \"14936c88-97a1-45bd-96f7-947ea39807a0\") " pod="metallb-system/frr-k8s-scpzv" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.371360 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/14936c88-97a1-45bd-96f7-947ea39807a0-metrics\") pod \"frr-k8s-scpzv\" (UID: \"14936c88-97a1-45bd-96f7-947ea39807a0\") " pod="metallb-system/frr-k8s-scpzv" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.371408 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/14936c88-97a1-45bd-96f7-947ea39807a0-frr-startup\") pod \"frr-k8s-scpzv\" (UID: 
\"14936c88-97a1-45bd-96f7-947ea39807a0\") " pod="metallb-system/frr-k8s-scpzv" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.371436 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrvrp\" (UniqueName: \"kubernetes.io/projected/14936c88-97a1-45bd-96f7-947ea39807a0-kube-api-access-qrvrp\") pod \"frr-k8s-scpzv\" (UID: \"14936c88-97a1-45bd-96f7-947ea39807a0\") " pod="metallb-system/frr-k8s-scpzv" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.371495 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/14936c88-97a1-45bd-96f7-947ea39807a0-reloader\") pod \"frr-k8s-scpzv\" (UID: \"14936c88-97a1-45bd-96f7-947ea39807a0\") " pod="metallb-system/frr-k8s-scpzv" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.371522 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/14936c88-97a1-45bd-96f7-947ea39807a0-metrics-certs\") pod \"frr-k8s-scpzv\" (UID: \"14936c88-97a1-45bd-96f7-947ea39807a0\") " pod="metallb-system/frr-k8s-scpzv" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.372094 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-9qz4w"] Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.372794 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9qz4w" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.375164 4893 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.388245 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-9qz4w"] Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.472543 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/14936c88-97a1-45bd-96f7-947ea39807a0-frr-sockets\") pod \"frr-k8s-scpzv\" (UID: \"14936c88-97a1-45bd-96f7-947ea39807a0\") " pod="metallb-system/frr-k8s-scpzv" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.472599 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/14936c88-97a1-45bd-96f7-947ea39807a0-metrics\") pod \"frr-k8s-scpzv\" (UID: \"14936c88-97a1-45bd-96f7-947ea39807a0\") " pod="metallb-system/frr-k8s-scpzv" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.472638 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/14936c88-97a1-45bd-96f7-947ea39807a0-frr-startup\") pod \"frr-k8s-scpzv\" (UID: \"14936c88-97a1-45bd-96f7-947ea39807a0\") " pod="metallb-system/frr-k8s-scpzv" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.473041 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/14936c88-97a1-45bd-96f7-947ea39807a0-metrics\") pod \"frr-k8s-scpzv\" (UID: \"14936c88-97a1-45bd-96f7-947ea39807a0\") " pod="metallb-system/frr-k8s-scpzv" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.473251 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrvrp\" (UniqueName: 
\"kubernetes.io/projected/14936c88-97a1-45bd-96f7-947ea39807a0-kube-api-access-qrvrp\") pod \"frr-k8s-scpzv\" (UID: \"14936c88-97a1-45bd-96f7-947ea39807a0\") " pod="metallb-system/frr-k8s-scpzv" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.473285 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/14936c88-97a1-45bd-96f7-947ea39807a0-reloader\") pod \"frr-k8s-scpzv\" (UID: \"14936c88-97a1-45bd-96f7-947ea39807a0\") " pod="metallb-system/frr-k8s-scpzv" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.473275 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/14936c88-97a1-45bd-96f7-947ea39807a0-frr-sockets\") pod \"frr-k8s-scpzv\" (UID: \"14936c88-97a1-45bd-96f7-947ea39807a0\") " pod="metallb-system/frr-k8s-scpzv" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.473301 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/14936c88-97a1-45bd-96f7-947ea39807a0-metrics-certs\") pod \"frr-k8s-scpzv\" (UID: \"14936c88-97a1-45bd-96f7-947ea39807a0\") " pod="metallb-system/frr-k8s-scpzv" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.473421 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/14936c88-97a1-45bd-96f7-947ea39807a0-frr-conf\") pod \"frr-k8s-scpzv\" (UID: \"14936c88-97a1-45bd-96f7-947ea39807a0\") " pod="metallb-system/frr-k8s-scpzv" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.473863 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/14936c88-97a1-45bd-96f7-947ea39807a0-frr-conf\") pod \"frr-k8s-scpzv\" (UID: \"14936c88-97a1-45bd-96f7-947ea39807a0\") " pod="metallb-system/frr-k8s-scpzv" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.474203 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/14936c88-97a1-45bd-96f7-947ea39807a0-reloader\") pod \"frr-k8s-scpzv\" (UID: \"14936c88-97a1-45bd-96f7-947ea39807a0\") " pod="metallb-system/frr-k8s-scpzv" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.476598 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/14936c88-97a1-45bd-96f7-947ea39807a0-frr-startup\") pod \"frr-k8s-scpzv\" (UID: \"14936c88-97a1-45bd-96f7-947ea39807a0\") " pod="metallb-system/frr-k8s-scpzv" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.481715 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/14936c88-97a1-45bd-96f7-947ea39807a0-metrics-certs\") pod \"frr-k8s-scpzv\" (UID: \"14936c88-97a1-45bd-96f7-947ea39807a0\") " pod="metallb-system/frr-k8s-scpzv" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.483251 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-wr85l"] Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.484216 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-wr85l" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.486843 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.487009 4893 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.487116 4893 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.487583 4893 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-vlrbj" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.488273 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-9vnsm"] Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.489139 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-9vnsm" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.493662 4893 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.498897 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrvrp\" (UniqueName: \"kubernetes.io/projected/14936c88-97a1-45bd-96f7-947ea39807a0-kube-api-access-qrvrp\") pod \"frr-k8s-scpzv\" (UID: \"14936c88-97a1-45bd-96f7-947ea39807a0\") " pod="metallb-system/frr-k8s-scpzv" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.499180 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-9vnsm"] Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.574705 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh6vn\" (UniqueName: \"kubernetes.io/projected/7e681161-fdf4-4d05-bc40-328c7368b9ac-kube-api-access-jh6vn\") pod \"frr-k8s-webhook-server-7df86c4f6c-9qz4w\" (UID: \"7e681161-fdf4-4d05-bc40-328c7368b9ac\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9qz4w" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.575004 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7e681161-fdf4-4d05-bc40-328c7368b9ac-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-9qz4w\" (UID: \"7e681161-fdf4-4d05-bc40-328c7368b9ac\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9qz4w" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.677085 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5cdd1458-e530-4bbc-9103-12b9f43ccbe9-memberlist\") pod \"speaker-wr85l\" (UID: \"5cdd1458-e530-4bbc-9103-12b9f43ccbe9\") " pod="metallb-system/speaker-wr85l" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.677210 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dsxb\" (UniqueName: \"kubernetes.io/projected/5cdd1458-e530-4bbc-9103-12b9f43ccbe9-kube-api-access-9dsxb\") pod \"speaker-wr85l\" (UID: \"5cdd1458-e530-4bbc-9103-12b9f43ccbe9\") " pod="metallb-system/speaker-wr85l" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.677242 4893 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5cdd1458-e530-4bbc-9103-12b9f43ccbe9-metrics-certs\") pod \"speaker-wr85l\" (UID: \"5cdd1458-e530-4bbc-9103-12b9f43ccbe9\") " pod="metallb-system/speaker-wr85l" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.677265 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f3f10444-f010-494d-936a-b3634dde0503-cert\") pod \"controller-6968d8fdc4-9vnsm\" (UID: \"f3f10444-f010-494d-936a-b3634dde0503\") " pod="metallb-system/controller-6968d8fdc4-9vnsm" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.677327 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3f10444-f010-494d-936a-b3634dde0503-metrics-certs\") pod \"controller-6968d8fdc4-9vnsm\" (UID: \"f3f10444-f010-494d-936a-b3634dde0503\") " pod="metallb-system/controller-6968d8fdc4-9vnsm" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.677374 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7e681161-fdf4-4d05-bc40-328c7368b9ac-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-9qz4w\" (UID: \"7e681161-fdf4-4d05-bc40-328c7368b9ac\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9qz4w" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.677399 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjk4t\" (UniqueName: \"kubernetes.io/projected/f3f10444-f010-494d-936a-b3634dde0503-kube-api-access-vjk4t\") pod \"controller-6968d8fdc4-9vnsm\" (UID: \"f3f10444-f010-494d-936a-b3634dde0503\") " pod="metallb-system/controller-6968d8fdc4-9vnsm" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.677432 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5cdd1458-e530-4bbc-9103-12b9f43ccbe9-metallb-excludel2\") pod \"speaker-wr85l\" (UID: \"5cdd1458-e530-4bbc-9103-12b9f43ccbe9\") " pod="metallb-system/speaker-wr85l" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.677496 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh6vn\" (UniqueName: \"kubernetes.io/projected/7e681161-fdf4-4d05-bc40-328c7368b9ac-kube-api-access-jh6vn\") pod \"frr-k8s-webhook-server-7df86c4f6c-9qz4w\" (UID: \"7e681161-fdf4-4d05-bc40-328c7368b9ac\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9qz4w" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.685072 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-scpzv" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.686452 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7e681161-fdf4-4d05-bc40-328c7368b9ac-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-9qz4w\" (UID: \"7e681161-fdf4-4d05-bc40-328c7368b9ac\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9qz4w" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.692909 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh6vn\" (UniqueName: \"kubernetes.io/projected/7e681161-fdf4-4d05-bc40-328c7368b9ac-kube-api-access-jh6vn\") pod \"frr-k8s-webhook-server-7df86c4f6c-9qz4w\" (UID: \"7e681161-fdf4-4d05-bc40-328c7368b9ac\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9qz4w" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.704468 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9qz4w" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.778610 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5cdd1458-e530-4bbc-9103-12b9f43ccbe9-memberlist\") pod \"speaker-wr85l\" (UID: \"5cdd1458-e530-4bbc-9103-12b9f43ccbe9\") " pod="metallb-system/speaker-wr85l" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.779001 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dsxb\" (UniqueName: \"kubernetes.io/projected/5cdd1458-e530-4bbc-9103-12b9f43ccbe9-kube-api-access-9dsxb\") pod \"speaker-wr85l\" (UID: \"5cdd1458-e530-4bbc-9103-12b9f43ccbe9\") " pod="metallb-system/speaker-wr85l" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.779035 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5cdd1458-e530-4bbc-9103-12b9f43ccbe9-metrics-certs\") pod \"speaker-wr85l\" (UID: \"5cdd1458-e530-4bbc-9103-12b9f43ccbe9\") " pod="metallb-system/speaker-wr85l" Jan 28 15:17:25 crc kubenswrapper[4893]: E0128 15:17:25.779151 4893 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 28 15:17:25 crc kubenswrapper[4893]: E0128 15:17:25.779230 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5cdd1458-e530-4bbc-9103-12b9f43ccbe9-memberlist podName:5cdd1458-e530-4bbc-9103-12b9f43ccbe9 nodeName:}" failed. No retries permitted until 2026-01-28 15:17:26.279203818 +0000 UTC m=+964.052818846 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/5cdd1458-e530-4bbc-9103-12b9f43ccbe9-memberlist") pod "speaker-wr85l" (UID: "5cdd1458-e530-4bbc-9103-12b9f43ccbe9") : secret "metallb-memberlist" not found Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.784727 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f3f10444-f010-494d-936a-b3634dde0503-cert\") pod \"controller-6968d8fdc4-9vnsm\" (UID: \"f3f10444-f010-494d-936a-b3634dde0503\") " pod="metallb-system/controller-6968d8fdc4-9vnsm" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.784856 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3f10444-f010-494d-936a-b3634dde0503-metrics-certs\") pod \"controller-6968d8fdc4-9vnsm\" (UID: \"f3f10444-f010-494d-936a-b3634dde0503\") " pod="metallb-system/controller-6968d8fdc4-9vnsm" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.784911 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjk4t\" (UniqueName: \"kubernetes.io/projected/f3f10444-f010-494d-936a-b3634dde0503-kube-api-access-vjk4t\") pod \"controller-6968d8fdc4-9vnsm\" (UID: \"f3f10444-f010-494d-936a-b3634dde0503\") " pod="metallb-system/controller-6968d8fdc4-9vnsm" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.784951 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5cdd1458-e530-4bbc-9103-12b9f43ccbe9-metallb-excludel2\") pod \"speaker-wr85l\" (UID: \"5cdd1458-e530-4bbc-9103-12b9f43ccbe9\") " pod="metallb-system/speaker-wr85l" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.785992 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5cdd1458-e530-4bbc-9103-12b9f43ccbe9-metallb-excludel2\") pod \"speaker-wr85l\" (UID: \"5cdd1458-e530-4bbc-9103-12b9f43ccbe9\") " pod="metallb-system/speaker-wr85l" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.788354 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5cdd1458-e530-4bbc-9103-12b9f43ccbe9-metrics-certs\") pod \"speaker-wr85l\" (UID: \"5cdd1458-e530-4bbc-9103-12b9f43ccbe9\") " pod="metallb-system/speaker-wr85l" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.788749 4893 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.790717 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f3f10444-f010-494d-936a-b3634dde0503-metrics-certs\") pod \"controller-6968d8fdc4-9vnsm\" (UID: \"f3f10444-f010-494d-936a-b3634dde0503\") " pod="metallb-system/controller-6968d8fdc4-9vnsm" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.795335 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dsxb\" (UniqueName: \"kubernetes.io/projected/5cdd1458-e530-4bbc-9103-12b9f43ccbe9-kube-api-access-9dsxb\") pod \"speaker-wr85l\" (UID: \"5cdd1458-e530-4bbc-9103-12b9f43ccbe9\") " pod="metallb-system/speaker-wr85l" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.800040 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/f3f10444-f010-494d-936a-b3634dde0503-cert\") pod \"controller-6968d8fdc4-9vnsm\" (UID: \"f3f10444-f010-494d-936a-b3634dde0503\") " pod="metallb-system/controller-6968d8fdc4-9vnsm" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.809748 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjk4t\" (UniqueName: \"kubernetes.io/projected/f3f10444-f010-494d-936a-b3634dde0503-kube-api-access-vjk4t\") pod \"controller-6968d8fdc4-9vnsm\" (UID: \"f3f10444-f010-494d-936a-b3634dde0503\") " pod="metallb-system/controller-6968d8fdc4-9vnsm" Jan 28 15:17:25 crc kubenswrapper[4893]: I0128 15:17:25.892034 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-9vnsm" Jan 28 15:17:26 crc kubenswrapper[4893]: I0128 15:17:26.085240 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-9vnsm"] Jan 28 15:17:26 crc kubenswrapper[4893]: W0128 15:17:26.090458 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3f10444_f010_494d_936a_b3634dde0503.slice/crio-198941b8a7e41c6f064b5df966b0d038896bbe1298e3c4a50fd6c4f030cd0c99 WatchSource:0}: Error finding container 198941b8a7e41c6f064b5df966b0d038896bbe1298e3c4a50fd6c4f030cd0c99: Status 404 returned error can't find the container with id 198941b8a7e41c6f064b5df966b0d038896bbe1298e3c4a50fd6c4f030cd0c99 Jan 28 15:17:26 crc kubenswrapper[4893]: I0128 15:17:26.128279 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-9qz4w"] Jan 28 15:17:26 crc kubenswrapper[4893]: I0128 15:17:26.291947 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5cdd1458-e530-4bbc-9103-12b9f43ccbe9-memberlist\") pod \"speaker-wr85l\" (UID: \"5cdd1458-e530-4bbc-9103-12b9f43ccbe9\") " pod="metallb-system/speaker-wr85l" Jan 28 15:17:26 crc kubenswrapper[4893]: E0128 15:17:26.292256 4893 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 28 15:17:26 crc kubenswrapper[4893]: E0128 15:17:26.292379 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5cdd1458-e530-4bbc-9103-12b9f43ccbe9-memberlist podName:5cdd1458-e530-4bbc-9103-12b9f43ccbe9 nodeName:}" failed. No retries permitted until 2026-01-28 15:17:27.292352355 +0000 UTC m=+965.065967393 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/5cdd1458-e530-4bbc-9103-12b9f43ccbe9-memberlist") pod "speaker-wr85l" (UID: "5cdd1458-e530-4bbc-9103-12b9f43ccbe9") : secret "metallb-memberlist" not found Jan 28 15:17:26 crc kubenswrapper[4893]: I0128 15:17:26.424706 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9qz4w" event={"ID":"7e681161-fdf4-4d05-bc40-328c7368b9ac","Type":"ContainerStarted","Data":"16c6d758b5159faf682d4b338c67a61bfb4af6f1b1affa8de0f5b1e84a79666d"} Jan 28 15:17:26 crc kubenswrapper[4893]: I0128 15:17:26.425906 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-scpzv" event={"ID":"14936c88-97a1-45bd-96f7-947ea39807a0","Type":"ContainerStarted","Data":"f59098805a91888d450c9fc0317aa30679eb6c595fe189be6daf348b54422008"} Jan 28 15:17:26 crc kubenswrapper[4893]: I0128 15:17:26.428670 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-9vnsm" event={"ID":"f3f10444-f010-494d-936a-b3634dde0503","Type":"ContainerStarted","Data":"caa6940c5b8442285607710b219a71637459d210f4ae2216c1ab62e5bdfcc808"} Jan 28 15:17:26 crc kubenswrapper[4893]: I0128 15:17:26.428740 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-9vnsm" event={"ID":"f3f10444-f010-494d-936a-b3634dde0503","Type":"ContainerStarted","Data":"a5e824c1d30c7294c39cc0abc5a58976d628e8b237621ceecd7052bcc291c74f"} Jan 28 15:17:26 crc kubenswrapper[4893]: I0128 15:17:26.428762 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-9vnsm" event={"ID":"f3f10444-f010-494d-936a-b3634dde0503","Type":"ContainerStarted","Data":"198941b8a7e41c6f064b5df966b0d038896bbe1298e3c4a50fd6c4f030cd0c99"} Jan 28 15:17:26 crc kubenswrapper[4893]: I0128 15:17:26.428881 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-9vnsm" Jan 28 15:17:26 crc kubenswrapper[4893]: I0128 15:17:26.451743 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-9vnsm" podStartSLOduration=1.451717031 podStartE2EDuration="1.451717031s" podCreationTimestamp="2026-01-28 15:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:17:26.445219272 +0000 UTC m=+964.218834300" watchObservedRunningTime="2026-01-28 15:17:26.451717031 +0000 UTC m=+964.225332059" Jan 28 15:17:27 crc kubenswrapper[4893]: I0128 15:17:27.309225 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5cdd1458-e530-4bbc-9103-12b9f43ccbe9-memberlist\") pod \"speaker-wr85l\" (UID: \"5cdd1458-e530-4bbc-9103-12b9f43ccbe9\") " pod="metallb-system/speaker-wr85l" Jan 28 15:17:27 crc kubenswrapper[4893]: I0128 15:17:27.315309 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5cdd1458-e530-4bbc-9103-12b9f43ccbe9-memberlist\") pod \"speaker-wr85l\" (UID: \"5cdd1458-e530-4bbc-9103-12b9f43ccbe9\") " pod="metallb-system/speaker-wr85l" Jan 28 15:17:27 crc kubenswrapper[4893]: I0128 15:17:27.383610 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-wr85l" Jan 28 15:17:27 crc kubenswrapper[4893]: W0128 15:17:27.414621 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cdd1458_e530_4bbc_9103_12b9f43ccbe9.slice/crio-26cf4d78a1f1c5f4b44c7955d9e7503289edc7abad3f2fd361564a03693dd0d2 WatchSource:0}: Error finding container 26cf4d78a1f1c5f4b44c7955d9e7503289edc7abad3f2fd361564a03693dd0d2: Status 404 returned error can't find the container with id 26cf4d78a1f1c5f4b44c7955d9e7503289edc7abad3f2fd361564a03693dd0d2 Jan 28 15:17:27 crc kubenswrapper[4893]: I0128 15:17:27.434925 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wr85l" event={"ID":"5cdd1458-e530-4bbc-9103-12b9f43ccbe9","Type":"ContainerStarted","Data":"26cf4d78a1f1c5f4b44c7955d9e7503289edc7abad3f2fd361564a03693dd0d2"} Jan 28 15:17:28 crc kubenswrapper[4893]: I0128 15:17:28.449392 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wr85l" event={"ID":"5cdd1458-e530-4bbc-9103-12b9f43ccbe9","Type":"ContainerStarted","Data":"218a351c4508a7f02eea78d5ad5c0a99d214989c6692cf9b3625a7ab6cbf7568"} Jan 28 15:17:28 crc kubenswrapper[4893]: I0128 15:17:28.449809 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-wr85l" Jan 28 15:17:28 crc kubenswrapper[4893]: I0128 15:17:28.449825 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wr85l" event={"ID":"5cdd1458-e530-4bbc-9103-12b9f43ccbe9","Type":"ContainerStarted","Data":"57ab6b112c3b8fd3775eeb5fbdcaef24a474830baad9ec8c37009eea56556e19"} Jan 28 15:17:28 crc kubenswrapper[4893]: I0128 15:17:28.495835 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-wr85l" podStartSLOduration=3.495817493 podStartE2EDuration="3.495817493s" podCreationTimestamp="2026-01-28 15:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:17:28.491459513 +0000 UTC m=+966.265074561" watchObservedRunningTime="2026-01-28 15:17:28.495817493 +0000 UTC m=+966.269432521" Jan 28 15:17:35 crc kubenswrapper[4893]: I0128 15:17:35.508864 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9qz4w" event={"ID":"7e681161-fdf4-4d05-bc40-328c7368b9ac","Type":"ContainerStarted","Data":"f5f5783161fec7464450a553d1476ca1ca6197319d7f23f2f66916c4b90ca947"} Jan 28 15:17:35 crc kubenswrapper[4893]: I0128 15:17:35.509282 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9qz4w" Jan 28 15:17:35 crc kubenswrapper[4893]: I0128 15:17:35.510830 4893 generic.go:334] "Generic (PLEG): container finished" podID="14936c88-97a1-45bd-96f7-947ea39807a0" containerID="7f798e20de101aadb1ee0d5f5157f1683b0d0f7314718c474e68d2471040c75b" exitCode=0 Jan 28 15:17:35 crc kubenswrapper[4893]: I0128 15:17:35.510902 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-scpzv" event={"ID":"14936c88-97a1-45bd-96f7-947ea39807a0","Type":"ContainerDied","Data":"7f798e20de101aadb1ee0d5f5157f1683b0d0f7314718c474e68d2471040c75b"} Jan 28 15:17:35 crc kubenswrapper[4893]: I0128 15:17:35.528075 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9qz4w" 
podStartSLOduration=1.9294723010000001 podStartE2EDuration="10.528043945s" podCreationTimestamp="2026-01-28 15:17:25 +0000 UTC" firstStartedPulling="2026-01-28 15:17:26.133977046 +0000 UTC m=+963.907592064" lastFinishedPulling="2026-01-28 15:17:34.73254869 +0000 UTC m=+972.506163708" observedRunningTime="2026-01-28 15:17:35.524909239 +0000 UTC m=+973.298524287" watchObservedRunningTime="2026-01-28 15:17:35.528043945 +0000 UTC m=+973.301658963" Jan 28 15:17:36 crc kubenswrapper[4893]: I0128 15:17:36.518575 4893 generic.go:334] "Generic (PLEG): container finished" podID="14936c88-97a1-45bd-96f7-947ea39807a0" containerID="38caf26f1579f303bab890e97b11a5511256779ba06334a5bdf166933a0455fc" exitCode=0 Jan 28 15:17:36 crc kubenswrapper[4893]: I0128 15:17:36.518646 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-scpzv" event={"ID":"14936c88-97a1-45bd-96f7-947ea39807a0","Type":"ContainerDied","Data":"38caf26f1579f303bab890e97b11a5511256779ba06334a5bdf166933a0455fc"} Jan 28 15:17:37 crc kubenswrapper[4893]: I0128 15:17:37.388825 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-wr85l" Jan 28 15:17:37 crc kubenswrapper[4893]: I0128 15:17:37.526579 4893 generic.go:334] "Generic (PLEG): container finished" podID="14936c88-97a1-45bd-96f7-947ea39807a0" containerID="a23e3b9f89d23f16ed569cede06f1f398cdc8becc5b06d5b5956c9de3cf2da4c" exitCode=0 Jan 28 15:17:37 crc kubenswrapper[4893]: I0128 15:17:37.526651 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-scpzv" event={"ID":"14936c88-97a1-45bd-96f7-947ea39807a0","Type":"ContainerDied","Data":"a23e3b9f89d23f16ed569cede06f1f398cdc8becc5b06d5b5956c9de3cf2da4c"} Jan 28 15:17:38 crc kubenswrapper[4893]: I0128 15:17:38.537138 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-scpzv" event={"ID":"14936c88-97a1-45bd-96f7-947ea39807a0","Type":"ContainerStarted","Data":"f58756879384d9c861acc22ff565a55752d75863991019d2ced38e68519573e0"} Jan 28 15:17:38 crc kubenswrapper[4893]: I0128 15:17:38.537474 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-scpzv" event={"ID":"14936c88-97a1-45bd-96f7-947ea39807a0","Type":"ContainerStarted","Data":"3fef096e3bfa17a1090f9d4f612cca57d9c2227b518fa8b75acdc908d75a5704"} Jan 28 15:17:38 crc kubenswrapper[4893]: I0128 15:17:38.537508 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-scpzv" event={"ID":"14936c88-97a1-45bd-96f7-947ea39807a0","Type":"ContainerStarted","Data":"16bf3c842eabb8d62ba56ca62a96c241090909f9b1677a36f38ad4df2d544278"} Jan 28 15:17:38 crc kubenswrapper[4893]: I0128 15:17:38.537518 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-scpzv" event={"ID":"14936c88-97a1-45bd-96f7-947ea39807a0","Type":"ContainerStarted","Data":"c20795517257ac7e189317b91296747916fb4d191fde04fa74e566a9c274d7eb"} Jan 28 15:17:38 crc kubenswrapper[4893]: I0128 15:17:38.903808 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m"] Jan 28 15:17:38 crc kubenswrapper[4893]: I0128 15:17:38.905602 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m" Jan 28 15:17:38 crc kubenswrapper[4893]: I0128 15:17:38.912345 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 15:17:38 crc kubenswrapper[4893]: I0128 15:17:38.917190 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m"] Jan 28 15:17:39 crc kubenswrapper[4893]: I0128 15:17:39.072671 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/37c60b30-8d14-47d7-97ed-0f797932fe82-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m\" (UID: \"37c60b30-8d14-47d7-97ed-0f797932fe82\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m" Jan 28 15:17:39 crc kubenswrapper[4893]: I0128 15:17:39.073208 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4pg4\" (UniqueName: \"kubernetes.io/projected/37c60b30-8d14-47d7-97ed-0f797932fe82-kube-api-access-j4pg4\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m\" (UID: \"37c60b30-8d14-47d7-97ed-0f797932fe82\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m" Jan 28 15:17:39 crc kubenswrapper[4893]: I0128 15:17:39.073364 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/37c60b30-8d14-47d7-97ed-0f797932fe82-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m\" (UID: \"37c60b30-8d14-47d7-97ed-0f797932fe82\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m" Jan 28 15:17:39 crc kubenswrapper[4893]: I0128 15:17:39.175059 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/37c60b30-8d14-47d7-97ed-0f797932fe82-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m\" (UID: \"37c60b30-8d14-47d7-97ed-0f797932fe82\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m" Jan 28 15:17:39 crc kubenswrapper[4893]: I0128 15:17:39.175152 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4pg4\" (UniqueName: \"kubernetes.io/projected/37c60b30-8d14-47d7-97ed-0f797932fe82-kube-api-access-j4pg4\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m\" (UID: \"37c60b30-8d14-47d7-97ed-0f797932fe82\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m" Jan 28 15:17:39 crc kubenswrapper[4893]: I0128 15:17:39.175207 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/37c60b30-8d14-47d7-97ed-0f797932fe82-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m\" (UID: \"37c60b30-8d14-47d7-97ed-0f797932fe82\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m" Jan 28 15:17:39 crc kubenswrapper[4893]: I0128 15:17:39.175617 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/37c60b30-8d14-47d7-97ed-0f797932fe82-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m\" (UID: \"37c60b30-8d14-47d7-97ed-0f797932fe82\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m" Jan 28 15:17:39 crc kubenswrapper[4893]: I0128 15:17:39.175786 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/37c60b30-8d14-47d7-97ed-0f797932fe82-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m\" (UID: \"37c60b30-8d14-47d7-97ed-0f797932fe82\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m" Jan 28 15:17:39 crc kubenswrapper[4893]: I0128 15:17:39.197028 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4pg4\" (UniqueName: \"kubernetes.io/projected/37c60b30-8d14-47d7-97ed-0f797932fe82-kube-api-access-j4pg4\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m\" (UID: \"37c60b30-8d14-47d7-97ed-0f797932fe82\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m" Jan 28 15:17:39 crc kubenswrapper[4893]: I0128 15:17:39.265251 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m" Jan 28 15:17:39 crc kubenswrapper[4893]: I0128 15:17:39.497814 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m"] Jan 28 15:17:39 crc kubenswrapper[4893]: I0128 15:17:39.548903 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-scpzv" event={"ID":"14936c88-97a1-45bd-96f7-947ea39807a0","Type":"ContainerStarted","Data":"a1efe9b2a52340ac7eb812259ae1cb5f596f42a212281db44620b5c0651936f4"} Jan 28 15:17:39 crc kubenswrapper[4893]: I0128 15:17:39.548951 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-scpzv" event={"ID":"14936c88-97a1-45bd-96f7-947ea39807a0","Type":"ContainerStarted","Data":"2b9e2d9bcd29911a7581a4dab3adedd0fdb9c83ce97bc72348d41015ac4b999e"} Jan 28 15:17:39 crc kubenswrapper[4893]: I0128 15:17:39.550032 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-scpzv" Jan 28 15:17:39 crc kubenswrapper[4893]: I0128 15:17:39.551750 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m" event={"ID":"37c60b30-8d14-47d7-97ed-0f797932fe82","Type":"ContainerStarted","Data":"4c22023600f816e288bb1aedaecc9e9763fdc3242c19669423dfb18c7d6c3a3a"} Jan 28 15:17:39 crc kubenswrapper[4893]: I0128 15:17:39.574693 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-scpzv" podStartSLOduration=5.727407859 podStartE2EDuration="14.574674243s" podCreationTimestamp="2026-01-28 15:17:25 +0000 UTC" firstStartedPulling="2026-01-28 15:17:25.850758523 +0000 UTC m=+963.624373551" lastFinishedPulling="2026-01-28 15:17:34.698024907 +0000 UTC m=+972.471639935" observedRunningTime="2026-01-28 15:17:39.571941007 +0000 UTC m=+977.345556035" watchObservedRunningTime="2026-01-28 15:17:39.574674243 +0000 UTC m=+977.348289271" Jan 28 15:17:40 crc kubenswrapper[4893]: I0128 15:17:40.687107 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="metallb-system/frr-k8s-scpzv" Jan 28 15:17:40 crc kubenswrapper[4893]: I0128 15:17:40.726550 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-scpzv" Jan 28 15:17:41 crc kubenswrapper[4893]: I0128 15:17:41.567573 4893 generic.go:334] "Generic (PLEG): container finished" podID="37c60b30-8d14-47d7-97ed-0f797932fe82" containerID="c8f45027b544c5550fcc26bc991baf8644866022b0486d5a911bdc1aea29633a" exitCode=0 Jan 28 15:17:41 crc kubenswrapper[4893]: I0128 15:17:41.567622 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m" event={"ID":"37c60b30-8d14-47d7-97ed-0f797932fe82","Type":"ContainerDied","Data":"c8f45027b544c5550fcc26bc991baf8644866022b0486d5a911bdc1aea29633a"} Jan 28 15:17:45 crc kubenswrapper[4893]: I0128 15:17:45.711026 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9qz4w" Jan 28 15:17:45 crc kubenswrapper[4893]: I0128 15:17:45.896815 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-9vnsm" Jan 28 15:17:47 crc kubenswrapper[4893]: I0128 15:17:47.617756 4893 generic.go:334] "Generic (PLEG): container finished" podID="37c60b30-8d14-47d7-97ed-0f797932fe82" containerID="32607a5b4c343200b603c18865d358054a2dcc95f91106f5a4f0c94ba93c67bf" exitCode=0 Jan 28 15:17:47 crc kubenswrapper[4893]: I0128 15:17:47.617840 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m" event={"ID":"37c60b30-8d14-47d7-97ed-0f797932fe82","Type":"ContainerDied","Data":"32607a5b4c343200b603c18865d358054a2dcc95f91106f5a4f0c94ba93c67bf"} Jan 28 15:17:48 crc kubenswrapper[4893]: I0128 15:17:48.626190 4893 generic.go:334] "Generic (PLEG): container finished" podID="37c60b30-8d14-47d7-97ed-0f797932fe82" containerID="afd0e4d1acb740d5e4fc4b3427cb7de2414c96672fece71104dece3bacd33a62" exitCode=0 Jan 28 15:17:48 crc kubenswrapper[4893]: I0128 15:17:48.626254 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m" event={"ID":"37c60b30-8d14-47d7-97ed-0f797932fe82","Type":"ContainerDied","Data":"afd0e4d1acb740d5e4fc4b3427cb7de2414c96672fece71104dece3bacd33a62"} Jan 28 15:17:49 crc kubenswrapper[4893]: I0128 15:17:49.879047 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m" Jan 28 15:17:50 crc kubenswrapper[4893]: I0128 15:17:50.044649 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/37c60b30-8d14-47d7-97ed-0f797932fe82-util\") pod \"37c60b30-8d14-47d7-97ed-0f797932fe82\" (UID: \"37c60b30-8d14-47d7-97ed-0f797932fe82\") " Jan 28 15:17:50 crc kubenswrapper[4893]: I0128 15:17:50.044777 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4pg4\" (UniqueName: \"kubernetes.io/projected/37c60b30-8d14-47d7-97ed-0f797932fe82-kube-api-access-j4pg4\") pod \"37c60b30-8d14-47d7-97ed-0f797932fe82\" (UID: \"37c60b30-8d14-47d7-97ed-0f797932fe82\") " Jan 28 15:17:50 crc kubenswrapper[4893]: I0128 15:17:50.044987 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/37c60b30-8d14-47d7-97ed-0f797932fe82-bundle\") pod \"37c60b30-8d14-47d7-97ed-0f797932fe82\" (UID: \"37c60b30-8d14-47d7-97ed-0f797932fe82\") " Jan 28 15:17:50 crc kubenswrapper[4893]: I0128 15:17:50.047272 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37c60b30-8d14-47d7-97ed-0f797932fe82-bundle" (OuterVolumeSpecName: "bundle") pod "37c60b30-8d14-47d7-97ed-0f797932fe82" (UID: "37c60b30-8d14-47d7-97ed-0f797932fe82"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:17:50 crc kubenswrapper[4893]: I0128 15:17:50.055417 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37c60b30-8d14-47d7-97ed-0f797932fe82-util" (OuterVolumeSpecName: "util") pod "37c60b30-8d14-47d7-97ed-0f797932fe82" (UID: "37c60b30-8d14-47d7-97ed-0f797932fe82"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:17:50 crc kubenswrapper[4893]: I0128 15:17:50.055420 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37c60b30-8d14-47d7-97ed-0f797932fe82-kube-api-access-j4pg4" (OuterVolumeSpecName: "kube-api-access-j4pg4") pod "37c60b30-8d14-47d7-97ed-0f797932fe82" (UID: "37c60b30-8d14-47d7-97ed-0f797932fe82"). InnerVolumeSpecName "kube-api-access-j4pg4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:50 crc kubenswrapper[4893]: I0128 15:17:50.146936 4893 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/37c60b30-8d14-47d7-97ed-0f797932fe82-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:50 crc kubenswrapper[4893]: I0128 15:17:50.146981 4893 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/37c60b30-8d14-47d7-97ed-0f797932fe82-util\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:50 crc kubenswrapper[4893]: I0128 15:17:50.146991 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4pg4\" (UniqueName: \"kubernetes.io/projected/37c60b30-8d14-47d7-97ed-0f797932fe82-kube-api-access-j4pg4\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:50 crc kubenswrapper[4893]: I0128 15:17:50.643131 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m" event={"ID":"37c60b30-8d14-47d7-97ed-0f797932fe82","Type":"ContainerDied","Data":"4c22023600f816e288bb1aedaecc9e9763fdc3242c19669423dfb18c7d6c3a3a"} Jan 28 15:17:50 crc kubenswrapper[4893]: I0128 15:17:50.643211 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c22023600f816e288bb1aedaecc9e9763fdc3242c19669423dfb18c7d6c3a3a" Jan 28 15:17:50 crc kubenswrapper[4893]: I0128 15:17:50.643331 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m" Jan 28 15:17:55 crc kubenswrapper[4893]: I0128 15:17:55.690229 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-scpzv" Jan 28 15:17:56 crc kubenswrapper[4893]: I0128 15:17:56.872984 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-dnfrl"] Jan 28 15:17:56 crc kubenswrapper[4893]: E0128 15:17:56.873306 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c60b30-8d14-47d7-97ed-0f797932fe82" containerName="util" Jan 28 15:17:56 crc kubenswrapper[4893]: I0128 15:17:56.873324 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c60b30-8d14-47d7-97ed-0f797932fe82" containerName="util" Jan 28 15:17:56 crc kubenswrapper[4893]: E0128 15:17:56.873351 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c60b30-8d14-47d7-97ed-0f797932fe82" containerName="extract" Jan 28 15:17:56 crc kubenswrapper[4893]: I0128 15:17:56.873359 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c60b30-8d14-47d7-97ed-0f797932fe82" containerName="extract" Jan 28 15:17:56 crc kubenswrapper[4893]: E0128 15:17:56.873368 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c60b30-8d14-47d7-97ed-0f797932fe82" containerName="pull" Jan 28 15:17:56 crc kubenswrapper[4893]: I0128 15:17:56.873375 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c60b30-8d14-47d7-97ed-0f797932fe82" containerName="pull" Jan 28 15:17:56 crc kubenswrapper[4893]: I0128 15:17:56.873515 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="37c60b30-8d14-47d7-97ed-0f797932fe82" containerName="extract" Jan 28 15:17:56 crc kubenswrapper[4893]: I0128 15:17:56.874064 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-dnfrl" Jan 28 15:17:56 crc kubenswrapper[4893]: I0128 15:17:56.878826 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 28 15:17:56 crc kubenswrapper[4893]: I0128 15:17:56.879039 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 28 15:17:56 crc kubenswrapper[4893]: I0128 15:17:56.879148 4893 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-9dz47" Jan 28 15:17:56 crc kubenswrapper[4893]: I0128 15:17:56.909612 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-dnfrl"] Jan 28 15:17:57 crc kubenswrapper[4893]: I0128 15:17:57.050797 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbzwb\" (UniqueName: \"kubernetes.io/projected/75283d91-c9b1-4817-bb1c-7fce901a9b5e-kube-api-access-mbzwb\") pod \"cert-manager-operator-controller-manager-64cf6dff88-dnfrl\" (UID: \"75283d91-c9b1-4817-bb1c-7fce901a9b5e\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-dnfrl" Jan 28 15:17:57 crc kubenswrapper[4893]: I0128 15:17:57.050853 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/75283d91-c9b1-4817-bb1c-7fce901a9b5e-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-dnfrl\" (UID: \"75283d91-c9b1-4817-bb1c-7fce901a9b5e\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-dnfrl" Jan 28 15:17:57 crc kubenswrapper[4893]: I0128 15:17:57.152260 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbzwb\" (UniqueName: \"kubernetes.io/projected/75283d91-c9b1-4817-bb1c-7fce901a9b5e-kube-api-access-mbzwb\") pod \"cert-manager-operator-controller-manager-64cf6dff88-dnfrl\" (UID: \"75283d91-c9b1-4817-bb1c-7fce901a9b5e\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-dnfrl" Jan 28 15:17:57 crc kubenswrapper[4893]: I0128 15:17:57.152326 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/75283d91-c9b1-4817-bb1c-7fce901a9b5e-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-dnfrl\" (UID: \"75283d91-c9b1-4817-bb1c-7fce901a9b5e\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-dnfrl" Jan 28 15:17:57 crc kubenswrapper[4893]: I0128 15:17:57.152844 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/75283d91-c9b1-4817-bb1c-7fce901a9b5e-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-dnfrl\" (UID: \"75283d91-c9b1-4817-bb1c-7fce901a9b5e\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-dnfrl" Jan 28 15:17:57 crc kubenswrapper[4893]: I0128 15:17:57.172907 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbzwb\" (UniqueName: \"kubernetes.io/projected/75283d91-c9b1-4817-bb1c-7fce901a9b5e-kube-api-access-mbzwb\") pod \"cert-manager-operator-controller-manager-64cf6dff88-dnfrl\" (UID: \"75283d91-c9b1-4817-bb1c-7fce901a9b5e\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-dnfrl" Jan 28 15:17:57 crc kubenswrapper[4893]: I0128 15:17:57.198051 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-dnfrl" Jan 28 15:17:57 crc kubenswrapper[4893]: I0128 15:17:57.844559 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-dnfrl"] Jan 28 15:17:57 crc kubenswrapper[4893]: W0128 15:17:57.858869 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75283d91_c9b1_4817_bb1c_7fce901a9b5e.slice/crio-ac6851354dc62d48244c1b46cc5f9adf9685590025982bd41096bf4700d6c16e WatchSource:0}: Error finding container ac6851354dc62d48244c1b46cc5f9adf9685590025982bd41096bf4700d6c16e: Status 404 returned error can't find the container with id ac6851354dc62d48244c1b46cc5f9adf9685590025982bd41096bf4700d6c16e Jan 28 15:17:58 crc kubenswrapper[4893]: I0128 15:17:58.692012 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-dnfrl" event={"ID":"75283d91-c9b1-4817-bb1c-7fce901a9b5e","Type":"ContainerStarted","Data":"ac6851354dc62d48244c1b46cc5f9adf9685590025982bd41096bf4700d6c16e"} Jan 28 15:18:05 crc kubenswrapper[4893]: I0128 15:18:05.722639 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:18:05 crc kubenswrapper[4893]: I0128 15:18:05.723205 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:18:06 crc kubenswrapper[4893]: I0128 15:18:06.767097 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-dnfrl" event={"ID":"75283d91-c9b1-4817-bb1c-7fce901a9b5e","Type":"ContainerStarted","Data":"87695d2da8fdbe19b61bace61670e15792b8e2136efc970f85ec81e02cbcd0e4"} Jan 28 15:18:06 crc kubenswrapper[4893]: I0128 15:18:06.800764 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-dnfrl" podStartSLOduration=2.854288712 podStartE2EDuration="10.800728285s" podCreationTimestamp="2026-01-28 15:17:56 +0000 UTC" firstStartedPulling="2026-01-28 15:17:57.864584678 +0000 UTC m=+995.638199706" lastFinishedPulling="2026-01-28 15:18:05.811024261 +0000 UTC m=+1003.584639279" observedRunningTime="2026-01-28 15:18:06.793132405 +0000 UTC m=+1004.566747433" watchObservedRunningTime="2026-01-28 15:18:06.800728285 +0000 UTC m=+1004.574343313" Jan 28 15:18:10 crc kubenswrapper[4893]: I0128 15:18:10.317671 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-nlv8z"] Jan 28 15:18:10 crc kubenswrapper[4893]: I0128 15:18:10.318990 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-nlv8z" Jan 28 15:18:10 crc kubenswrapper[4893]: I0128 15:18:10.330707 4893 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-82qk2" Jan 28 15:18:10 crc kubenswrapper[4893]: I0128 15:18:10.330997 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 28 15:18:10 crc kubenswrapper[4893]: I0128 15:18:10.331599 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 28 15:18:10 crc kubenswrapper[4893]: I0128 15:18:10.333645 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-nlv8z"] Jan 28 15:18:10 crc kubenswrapper[4893]: I0128 15:18:10.471237 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfj4q\" (UniqueName: \"kubernetes.io/projected/8f982557-1def-4e14-868b-59a20e936677-kube-api-access-tfj4q\") pod \"cert-manager-webhook-f4fb5df64-nlv8z\" (UID: \"8f982557-1def-4e14-868b-59a20e936677\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-nlv8z" Jan 28 15:18:10 crc kubenswrapper[4893]: I0128 15:18:10.471408 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f982557-1def-4e14-868b-59a20e936677-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-nlv8z\" (UID: \"8f982557-1def-4e14-868b-59a20e936677\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-nlv8z" Jan 28 15:18:10 crc kubenswrapper[4893]: I0128 15:18:10.572515 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f982557-1def-4e14-868b-59a20e936677-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-nlv8z\" (UID: \"8f982557-1def-4e14-868b-59a20e936677\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-nlv8z" Jan 28 15:18:10 crc kubenswrapper[4893]: I0128 15:18:10.572627 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfj4q\" (UniqueName: \"kubernetes.io/projected/8f982557-1def-4e14-868b-59a20e936677-kube-api-access-tfj4q\") pod \"cert-manager-webhook-f4fb5df64-nlv8z\" (UID: \"8f982557-1def-4e14-868b-59a20e936677\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-nlv8z" Jan 28 15:18:10 crc kubenswrapper[4893]: I0128 15:18:10.600989 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfj4q\" (UniqueName: \"kubernetes.io/projected/8f982557-1def-4e14-868b-59a20e936677-kube-api-access-tfj4q\") pod \"cert-manager-webhook-f4fb5df64-nlv8z\" (UID: \"8f982557-1def-4e14-868b-59a20e936677\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-nlv8z" Jan 28 15:18:10 crc kubenswrapper[4893]: I0128 15:18:10.603200 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f982557-1def-4e14-868b-59a20e936677-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-nlv8z\" (UID: \"8f982557-1def-4e14-868b-59a20e936677\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-nlv8z" Jan 28 15:18:10 crc kubenswrapper[4893]: I0128 15:18:10.641812 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-nlv8z" Jan 28 15:18:11 crc kubenswrapper[4893]: I0128 15:18:11.115957 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-nlv8z"] Jan 28 15:18:11 crc kubenswrapper[4893]: I0128 15:18:11.289527 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-5gtt7"] Jan 28 15:18:11 crc kubenswrapper[4893]: I0128 15:18:11.291189 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-5gtt7" Jan 28 15:18:11 crc kubenswrapper[4893]: I0128 15:18:11.302604 4893 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-cn4bx" Jan 28 15:18:11 crc kubenswrapper[4893]: I0128 15:18:11.314489 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-5gtt7"] Jan 28 15:18:11 crc kubenswrapper[4893]: I0128 15:18:11.384487 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20d6abd9-a533-4fcd-abab-402ace4af89f-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-5gtt7\" (UID: \"20d6abd9-a533-4fcd-abab-402ace4af89f\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-5gtt7" Jan 28 15:18:11 crc kubenswrapper[4893]: I0128 15:18:11.384635 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9br8\" (UniqueName: \"kubernetes.io/projected/20d6abd9-a533-4fcd-abab-402ace4af89f-kube-api-access-q9br8\") pod \"cert-manager-cainjector-855d9ccff4-5gtt7\" (UID: \"20d6abd9-a533-4fcd-abab-402ace4af89f\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-5gtt7" Jan 28 15:18:11 crc kubenswrapper[4893]: I0128 15:18:11.486003 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9br8\" (UniqueName: \"kubernetes.io/projected/20d6abd9-a533-4fcd-abab-402ace4af89f-kube-api-access-q9br8\") pod \"cert-manager-cainjector-855d9ccff4-5gtt7\" (UID: \"20d6abd9-a533-4fcd-abab-402ace4af89f\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-5gtt7" Jan 28 15:18:11 crc kubenswrapper[4893]: I0128 15:18:11.486404 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20d6abd9-a533-4fcd-abab-402ace4af89f-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-5gtt7\" (UID: \"20d6abd9-a533-4fcd-abab-402ace4af89f\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-5gtt7" Jan 28 15:18:11 crc kubenswrapper[4893]: I0128 15:18:11.517213 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20d6abd9-a533-4fcd-abab-402ace4af89f-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-5gtt7\" (UID: \"20d6abd9-a533-4fcd-abab-402ace4af89f\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-5gtt7" Jan 28 15:18:11 crc kubenswrapper[4893]: I0128 15:18:11.517398 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9br8\" (UniqueName: \"kubernetes.io/projected/20d6abd9-a533-4fcd-abab-402ace4af89f-kube-api-access-q9br8\") pod \"cert-manager-cainjector-855d9ccff4-5gtt7\" (UID: \"20d6abd9-a533-4fcd-abab-402ace4af89f\") " 
pod="cert-manager/cert-manager-cainjector-855d9ccff4-5gtt7" Jan 28 15:18:11 crc kubenswrapper[4893]: I0128 15:18:11.609955 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-5gtt7" Jan 28 15:18:11 crc kubenswrapper[4893]: I0128 15:18:11.801349 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-nlv8z" event={"ID":"8f982557-1def-4e14-868b-59a20e936677","Type":"ContainerStarted","Data":"20cdbf1eca6967c18585868cba207d135a1cbfbb09c79e0b240f3e7be5565cd8"} Jan 28 15:18:12 crc kubenswrapper[4893]: W0128 15:18:12.071704 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20d6abd9_a533_4fcd_abab_402ace4af89f.slice/crio-dfd33e8138cf8af9bf50fcda30d8c4ad5b07cf4d936e33ebc2271597883040d9 WatchSource:0}: Error finding container dfd33e8138cf8af9bf50fcda30d8c4ad5b07cf4d936e33ebc2271597883040d9: Status 404 returned error can't find the container with id dfd33e8138cf8af9bf50fcda30d8c4ad5b07cf4d936e33ebc2271597883040d9 Jan 28 15:18:12 crc kubenswrapper[4893]: I0128 15:18:12.091440 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-5gtt7"] Jan 28 15:18:12 crc kubenswrapper[4893]: I0128 15:18:12.829168 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-5gtt7" event={"ID":"20d6abd9-a533-4fcd-abab-402ace4af89f","Type":"ContainerStarted","Data":"dfd33e8138cf8af9bf50fcda30d8c4ad5b07cf4d936e33ebc2271597883040d9"} Jan 28 15:18:20 crc kubenswrapper[4893]: I0128 15:18:20.848852 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-ffj99"] Jan 28 15:18:20 crc kubenswrapper[4893]: I0128 15:18:20.851409 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-ffj99" Jan 28 15:18:20 crc kubenswrapper[4893]: I0128 15:18:20.853559 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-ffj99"] Jan 28 15:18:20 crc kubenswrapper[4893]: I0128 15:18:20.853997 4893 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-56p7g" Jan 28 15:18:20 crc kubenswrapper[4893]: I0128 15:18:20.911652 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b3f91a14-0acc-4ebe-8e34-4d8be1758b80-bound-sa-token\") pod \"cert-manager-86cb77c54b-ffj99\" (UID: \"b3f91a14-0acc-4ebe-8e34-4d8be1758b80\") " pod="cert-manager/cert-manager-86cb77c54b-ffj99" Jan 28 15:18:20 crc kubenswrapper[4893]: I0128 15:18:20.911922 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjsgx\" (UniqueName: \"kubernetes.io/projected/b3f91a14-0acc-4ebe-8e34-4d8be1758b80-kube-api-access-gjsgx\") pod \"cert-manager-86cb77c54b-ffj99\" (UID: \"b3f91a14-0acc-4ebe-8e34-4d8be1758b80\") " pod="cert-manager/cert-manager-86cb77c54b-ffj99" Jan 28 15:18:21 crc kubenswrapper[4893]: I0128 15:18:21.013842 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjsgx\" (UniqueName: \"kubernetes.io/projected/b3f91a14-0acc-4ebe-8e34-4d8be1758b80-kube-api-access-gjsgx\") pod \"cert-manager-86cb77c54b-ffj99\" (UID: \"b3f91a14-0acc-4ebe-8e34-4d8be1758b80\") " pod="cert-manager/cert-manager-86cb77c54b-ffj99" Jan 28 15:18:21 crc kubenswrapper[4893]: I0128 15:18:21.014216 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b3f91a14-0acc-4ebe-8e34-4d8be1758b80-bound-sa-token\") pod \"cert-manager-86cb77c54b-ffj99\" (UID: \"b3f91a14-0acc-4ebe-8e34-4d8be1758b80\") " pod="cert-manager/cert-manager-86cb77c54b-ffj99" Jan 28 15:18:21 crc kubenswrapper[4893]: I0128 15:18:21.038800 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b3f91a14-0acc-4ebe-8e34-4d8be1758b80-bound-sa-token\") pod \"cert-manager-86cb77c54b-ffj99\" (UID: \"b3f91a14-0acc-4ebe-8e34-4d8be1758b80\") " pod="cert-manager/cert-manager-86cb77c54b-ffj99" Jan 28 15:18:21 crc kubenswrapper[4893]: I0128 15:18:21.043265 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjsgx\" (UniqueName: \"kubernetes.io/projected/b3f91a14-0acc-4ebe-8e34-4d8be1758b80-kube-api-access-gjsgx\") pod \"cert-manager-86cb77c54b-ffj99\" (UID: \"b3f91a14-0acc-4ebe-8e34-4d8be1758b80\") " pod="cert-manager/cert-manager-86cb77c54b-ffj99" Jan 28 15:18:21 crc kubenswrapper[4893]: I0128 15:18:21.182963 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-ffj99" Jan 28 15:18:22 crc kubenswrapper[4893]: I0128 15:18:22.918378 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-ffj99"] Jan 28 15:18:23 crc kubenswrapper[4893]: I0128 15:18:23.232880 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vscg2"] Jan 28 15:18:23 crc kubenswrapper[4893]: I0128 15:18:23.234861 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vscg2" Jan 28 15:18:23 crc kubenswrapper[4893]: I0128 15:18:23.255074 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vscg2"] Jan 28 15:18:23 crc kubenswrapper[4893]: I0128 15:18:23.263877 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6044f198-2b5c-4b77-be7b-7406b7287ff0-catalog-content\") pod \"community-operators-vscg2\" (UID: \"6044f198-2b5c-4b77-be7b-7406b7287ff0\") " pod="openshift-marketplace/community-operators-vscg2" Jan 28 15:18:23 crc kubenswrapper[4893]: I0128 15:18:23.263944 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kggvw\" (UniqueName: \"kubernetes.io/projected/6044f198-2b5c-4b77-be7b-7406b7287ff0-kube-api-access-kggvw\") pod \"community-operators-vscg2\" (UID: \"6044f198-2b5c-4b77-be7b-7406b7287ff0\") " pod="openshift-marketplace/community-operators-vscg2" Jan 28 15:18:23 crc kubenswrapper[4893]: I0128 15:18:23.263990 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6044f198-2b5c-4b77-be7b-7406b7287ff0-utilities\") pod \"community-operators-vscg2\" (UID: \"6044f198-2b5c-4b77-be7b-7406b7287ff0\") " pod="openshift-marketplace/community-operators-vscg2" Jan 28 15:18:23 crc kubenswrapper[4893]: I0128 15:18:23.366178 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6044f198-2b5c-4b77-be7b-7406b7287ff0-catalog-content\") pod \"community-operators-vscg2\" (UID: \"6044f198-2b5c-4b77-be7b-7406b7287ff0\") " pod="openshift-marketplace/community-operators-vscg2" Jan 28 15:18:23 crc kubenswrapper[4893]: I0128 15:18:23.367231 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kggvw\" (UniqueName: \"kubernetes.io/projected/6044f198-2b5c-4b77-be7b-7406b7287ff0-kube-api-access-kggvw\") pod \"community-operators-vscg2\" (UID: \"6044f198-2b5c-4b77-be7b-7406b7287ff0\") " pod="openshift-marketplace/community-operators-vscg2" Jan 28 15:18:23 crc kubenswrapper[4893]: I0128 15:18:23.367375 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6044f198-2b5c-4b77-be7b-7406b7287ff0-utilities\") pod \"community-operators-vscg2\" (UID: \"6044f198-2b5c-4b77-be7b-7406b7287ff0\") " pod="openshift-marketplace/community-operators-vscg2" Jan 28 15:18:23 crc kubenswrapper[4893]: I0128 15:18:23.367924 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6044f198-2b5c-4b77-be7b-7406b7287ff0-utilities\") pod \"community-operators-vscg2\" (UID: \"6044f198-2b5c-4b77-be7b-7406b7287ff0\") " pod="openshift-marketplace/community-operators-vscg2" Jan 28 15:18:23 crc kubenswrapper[4893]: I0128 15:18:23.367147 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6044f198-2b5c-4b77-be7b-7406b7287ff0-catalog-content\") pod \"community-operators-vscg2\" (UID: \"6044f198-2b5c-4b77-be7b-7406b7287ff0\") " pod="openshift-marketplace/community-operators-vscg2" Jan 28 15:18:23 crc kubenswrapper[4893]: I0128 15:18:23.394820 4893 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kggvw\" (UniqueName: \"kubernetes.io/projected/6044f198-2b5c-4b77-be7b-7406b7287ff0-kube-api-access-kggvw\") pod \"community-operators-vscg2\" (UID: \"6044f198-2b5c-4b77-be7b-7406b7287ff0\") " pod="openshift-marketplace/community-operators-vscg2" Jan 28 15:18:23 crc kubenswrapper[4893]: I0128 15:18:23.550744 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vscg2" Jan 28 15:18:23 crc kubenswrapper[4893]: I0128 15:18:23.918888 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-5gtt7" event={"ID":"20d6abd9-a533-4fcd-abab-402ace4af89f","Type":"ContainerStarted","Data":"2748a08cbfc60acbfd5777012771b234634b0fc962e9417d3690867b9b387a89"} Jan 28 15:18:23 crc kubenswrapper[4893]: I0128 15:18:23.921375 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-ffj99" event={"ID":"b3f91a14-0acc-4ebe-8e34-4d8be1758b80","Type":"ContainerStarted","Data":"78813ee39e3bdb7a48533fcf057e1c16a3f88ca18cb4e9c83e554b090e45d450"} Jan 28 15:18:23 crc kubenswrapper[4893]: I0128 15:18:23.921412 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-ffj99" event={"ID":"b3f91a14-0acc-4ebe-8e34-4d8be1758b80","Type":"ContainerStarted","Data":"17525253a1d6a99bb83874ed05053d15c4bf92cca1b6696f1b1e0f710c32062f"} Jan 28 15:18:23 crc kubenswrapper[4893]: I0128 15:18:23.923346 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-nlv8z" event={"ID":"8f982557-1def-4e14-868b-59a20e936677","Type":"ContainerStarted","Data":"554657d1a30f28cfcd5076a258564b73161b394964e0e44a695066b49424bd9e"} Jan 28 15:18:23 crc kubenswrapper[4893]: I0128 15:18:23.923505 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-nlv8z" Jan 28 15:18:23 crc kubenswrapper[4893]: I0128 15:18:23.945687 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-5gtt7" podStartSLOduration=2.167288173 podStartE2EDuration="12.945663323s" podCreationTimestamp="2026-01-28 15:18:11 +0000 UTC" firstStartedPulling="2026-01-28 15:18:12.082928288 +0000 UTC m=+1009.856543316" lastFinishedPulling="2026-01-28 15:18:22.861303428 +0000 UTC m=+1020.634918466" observedRunningTime="2026-01-28 15:18:23.939718419 +0000 UTC m=+1021.713333467" watchObservedRunningTime="2026-01-28 15:18:23.945663323 +0000 UTC m=+1021.719278351" Jan 28 15:18:23 crc kubenswrapper[4893]: I0128 15:18:23.961501 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-ffj99" podStartSLOduration=3.961467768 podStartE2EDuration="3.961467768s" podCreationTimestamp="2026-01-28 15:18:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:18:23.957122979 +0000 UTC m=+1021.730738017" watchObservedRunningTime="2026-01-28 15:18:23.961467768 +0000 UTC m=+1021.735082796" Jan 28 15:18:23 crc kubenswrapper[4893]: I0128 15:18:23.984458 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-nlv8z" podStartSLOduration=2.279137822 podStartE2EDuration="13.984437653s" podCreationTimestamp="2026-01-28 15:18:10 +0000 UTC" firstStartedPulling="2026-01-28 
15:18:11.123799978 +0000 UTC m=+1008.897415006" lastFinishedPulling="2026-01-28 15:18:22.829099799 +0000 UTC m=+1020.602714837" observedRunningTime="2026-01-28 15:18:23.981050749 +0000 UTC m=+1021.754665797" watchObservedRunningTime="2026-01-28 15:18:23.984437653 +0000 UTC m=+1021.758052681" Jan 28 15:18:24 crc kubenswrapper[4893]: I0128 15:18:24.031763 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vscg2"] Jan 28 15:18:24 crc kubenswrapper[4893]: I0128 15:18:24.931051 4893 generic.go:334] "Generic (PLEG): container finished" podID="6044f198-2b5c-4b77-be7b-7406b7287ff0" containerID="2659d792f7eb46f5b065fce8040a9f896140daafa0abe718745fa92df34b5982" exitCode=0 Jan 28 15:18:24 crc kubenswrapper[4893]: I0128 15:18:24.931224 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vscg2" event={"ID":"6044f198-2b5c-4b77-be7b-7406b7287ff0","Type":"ContainerDied","Data":"2659d792f7eb46f5b065fce8040a9f896140daafa0abe718745fa92df34b5982"} Jan 28 15:18:24 crc kubenswrapper[4893]: I0128 15:18:24.932222 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vscg2" event={"ID":"6044f198-2b5c-4b77-be7b-7406b7287ff0","Type":"ContainerStarted","Data":"bc27d1e35f1494680f3c59579045f451bcc8f5ead97b4305a2cbfb6233e779d6"} Jan 28 15:18:25 crc kubenswrapper[4893]: I0128 15:18:25.940688 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vscg2" event={"ID":"6044f198-2b5c-4b77-be7b-7406b7287ff0","Type":"ContainerStarted","Data":"91d150c323650a216cc4932986a2e36b378abed83090bb22e893beb907aceb6b"} Jan 28 15:18:26 crc kubenswrapper[4893]: I0128 15:18:26.949851 4893 generic.go:334] "Generic (PLEG): container finished" podID="6044f198-2b5c-4b77-be7b-7406b7287ff0" containerID="91d150c323650a216cc4932986a2e36b378abed83090bb22e893beb907aceb6b" exitCode=0 Jan 28 15:18:26 crc kubenswrapper[4893]: I0128 15:18:26.949954 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vscg2" event={"ID":"6044f198-2b5c-4b77-be7b-7406b7287ff0","Type":"ContainerDied","Data":"91d150c323650a216cc4932986a2e36b378abed83090bb22e893beb907aceb6b"} Jan 28 15:18:27 crc kubenswrapper[4893]: I0128 15:18:27.962143 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vscg2" event={"ID":"6044f198-2b5c-4b77-be7b-7406b7287ff0","Type":"ContainerStarted","Data":"d18fcbdef5fd65aae55a6ba98d709e70217f8203675b95a7cd49ca1784141198"} Jan 28 15:18:28 crc kubenswrapper[4893]: I0128 15:18:28.014194 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vscg2" podStartSLOduration=2.444837882 podStartE2EDuration="5.014161213s" podCreationTimestamp="2026-01-28 15:18:23 +0000 UTC" firstStartedPulling="2026-01-28 15:18:24.933457433 +0000 UTC m=+1022.707072461" lastFinishedPulling="2026-01-28 15:18:27.502780764 +0000 UTC m=+1025.276395792" observedRunningTime="2026-01-28 15:18:28.012850487 +0000 UTC m=+1025.786465515" watchObservedRunningTime="2026-01-28 15:18:28.014161213 +0000 UTC m=+1025.787776241" Jan 28 15:18:30 crc kubenswrapper[4893]: I0128 15:18:30.647740 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-nlv8z" Jan 28 15:18:31 crc kubenswrapper[4893]: I0128 15:18:31.528156 4893 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-h56ct"] Jan 28 15:18:31 crc kubenswrapper[4893]: I0128 15:18:31.530070 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h56ct" Jan 28 15:18:31 crc kubenswrapper[4893]: I0128 15:18:31.576025 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h56ct"] Jan 28 15:18:31 crc kubenswrapper[4893]: I0128 15:18:31.592538 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3-utilities\") pod \"certified-operators-h56ct\" (UID: \"b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3\") " pod="openshift-marketplace/certified-operators-h56ct" Jan 28 15:18:31 crc kubenswrapper[4893]: I0128 15:18:31.592597 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcsbk\" (UniqueName: \"kubernetes.io/projected/b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3-kube-api-access-pcsbk\") pod \"certified-operators-h56ct\" (UID: \"b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3\") " pod="openshift-marketplace/certified-operators-h56ct" Jan 28 15:18:31 crc kubenswrapper[4893]: I0128 15:18:31.592716 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3-catalog-content\") pod \"certified-operators-h56ct\" (UID: \"b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3\") " pod="openshift-marketplace/certified-operators-h56ct" Jan 28 15:18:31 crc kubenswrapper[4893]: I0128 15:18:31.694365 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3-catalog-content\") pod \"certified-operators-h56ct\" (UID: \"b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3\") " pod="openshift-marketplace/certified-operators-h56ct" Jan 28 15:18:31 crc kubenswrapper[4893]: I0128 15:18:31.694456 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3-utilities\") pod \"certified-operators-h56ct\" (UID: \"b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3\") " pod="openshift-marketplace/certified-operators-h56ct" Jan 28 15:18:31 crc kubenswrapper[4893]: I0128 15:18:31.694498 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcsbk\" (UniqueName: \"kubernetes.io/projected/b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3-kube-api-access-pcsbk\") pod \"certified-operators-h56ct\" (UID: \"b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3\") " pod="openshift-marketplace/certified-operators-h56ct" Jan 28 15:18:31 crc kubenswrapper[4893]: I0128 15:18:31.695418 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3-catalog-content\") pod \"certified-operators-h56ct\" (UID: \"b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3\") " pod="openshift-marketplace/certified-operators-h56ct" Jan 28 15:18:31 crc kubenswrapper[4893]: I0128 15:18:31.695787 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3-utilities\") pod \"certified-operators-h56ct\" (UID: 
\"b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3\") " pod="openshift-marketplace/certified-operators-h56ct" Jan 28 15:18:31 crc kubenswrapper[4893]: I0128 15:18:31.717368 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcsbk\" (UniqueName: \"kubernetes.io/projected/b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3-kube-api-access-pcsbk\") pod \"certified-operators-h56ct\" (UID: \"b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3\") " pod="openshift-marketplace/certified-operators-h56ct" Jan 28 15:18:31 crc kubenswrapper[4893]: I0128 15:18:31.900704 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h56ct" Jan 28 15:18:32 crc kubenswrapper[4893]: I0128 15:18:32.124542 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h56ct"] Jan 28 15:18:32 crc kubenswrapper[4893]: I0128 15:18:32.990177 4893 generic.go:334] "Generic (PLEG): container finished" podID="b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3" containerID="978596152dd05163d193bba21936f76f548668410b6c573a0d11c2e525f79836" exitCode=0 Jan 28 15:18:32 crc kubenswrapper[4893]: I0128 15:18:32.990549 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h56ct" event={"ID":"b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3","Type":"ContainerDied","Data":"978596152dd05163d193bba21936f76f548668410b6c573a0d11c2e525f79836"} Jan 28 15:18:32 crc kubenswrapper[4893]: I0128 15:18:32.990580 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h56ct" event={"ID":"b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3","Type":"ContainerStarted","Data":"35830da30a2c682e0cbe9d381d3d4a4823e0b33caa5187ab113e78229cf0342a"} Jan 28 15:18:33 crc kubenswrapper[4893]: I0128 15:18:33.551502 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vscg2" Jan 28 15:18:33 crc kubenswrapper[4893]: I0128 15:18:33.551579 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vscg2" Jan 28 15:18:33 crc kubenswrapper[4893]: I0128 15:18:33.610596 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vscg2" Jan 28 15:18:34 crc kubenswrapper[4893]: I0128 15:18:34.035040 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vscg2" Jan 28 15:18:35 crc kubenswrapper[4893]: I0128 15:18:35.006544 4893 generic.go:334] "Generic (PLEG): container finished" podID="b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3" containerID="71ed5b4212abd82da1fa56149c7f9391685eee40dd42c1372818469d6389b1d4" exitCode=0 Jan 28 15:18:35 crc kubenswrapper[4893]: I0128 15:18:35.006621 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h56ct" event={"ID":"b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3","Type":"ContainerDied","Data":"71ed5b4212abd82da1fa56149c7f9391685eee40dd42c1372818469d6389b1d4"} Jan 28 15:18:35 crc kubenswrapper[4893]: I0128 15:18:35.722409 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:18:35 crc kubenswrapper[4893]: I0128 15:18:35.722505 4893 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:18:36 crc kubenswrapper[4893]: I0128 15:18:36.015075 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h56ct" event={"ID":"b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3","Type":"ContainerStarted","Data":"ee3af57662a5d4cd25a814d98c829a21436d5aa193c36317f08de99aa34270cf"} Jan 28 15:18:36 crc kubenswrapper[4893]: I0128 15:18:36.035491 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h56ct" podStartSLOduration=2.555443094 podStartE2EDuration="5.035453421s" podCreationTimestamp="2026-01-28 15:18:31 +0000 UTC" firstStartedPulling="2026-01-28 15:18:32.992560025 +0000 UTC m=+1030.766175053" lastFinishedPulling="2026-01-28 15:18:35.472570352 +0000 UTC m=+1033.246185380" observedRunningTime="2026-01-28 15:18:36.034159806 +0000 UTC m=+1033.807774844" watchObservedRunningTime="2026-01-28 15:18:36.035453421 +0000 UTC m=+1033.809068449" Jan 28 15:18:37 crc kubenswrapper[4893]: I0128 15:18:37.901104 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vscg2"] Jan 28 15:18:37 crc kubenswrapper[4893]: I0128 15:18:37.901659 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vscg2" podUID="6044f198-2b5c-4b77-be7b-7406b7287ff0" containerName="registry-server" containerID="cri-o://d18fcbdef5fd65aae55a6ba98d709e70217f8203675b95a7cd49ca1784141198" gracePeriod=2 Jan 28 15:18:39 crc kubenswrapper[4893]: I0128 15:18:39.040065 4893 generic.go:334] "Generic (PLEG): container finished" podID="6044f198-2b5c-4b77-be7b-7406b7287ff0" containerID="d18fcbdef5fd65aae55a6ba98d709e70217f8203675b95a7cd49ca1784141198" exitCode=0 Jan 28 15:18:39 crc kubenswrapper[4893]: I0128 15:18:39.040176 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vscg2" event={"ID":"6044f198-2b5c-4b77-be7b-7406b7287ff0","Type":"ContainerDied","Data":"d18fcbdef5fd65aae55a6ba98d709e70217f8203675b95a7cd49ca1784141198"} Jan 28 15:18:40 crc kubenswrapper[4893]: I0128 15:18:40.172253 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vscg2" Jan 28 15:18:40 crc kubenswrapper[4893]: I0128 15:18:40.224099 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6044f198-2b5c-4b77-be7b-7406b7287ff0-catalog-content\") pod \"6044f198-2b5c-4b77-be7b-7406b7287ff0\" (UID: \"6044f198-2b5c-4b77-be7b-7406b7287ff0\") " Jan 28 15:18:40 crc kubenswrapper[4893]: I0128 15:18:40.224251 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6044f198-2b5c-4b77-be7b-7406b7287ff0-utilities\") pod \"6044f198-2b5c-4b77-be7b-7406b7287ff0\" (UID: \"6044f198-2b5c-4b77-be7b-7406b7287ff0\") " Jan 28 15:18:40 crc kubenswrapper[4893]: I0128 15:18:40.224289 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kggvw\" (UniqueName: \"kubernetes.io/projected/6044f198-2b5c-4b77-be7b-7406b7287ff0-kube-api-access-kggvw\") pod \"6044f198-2b5c-4b77-be7b-7406b7287ff0\" (UID: \"6044f198-2b5c-4b77-be7b-7406b7287ff0\") " Jan 28 15:18:40 crc kubenswrapper[4893]: I0128 15:18:40.225664 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6044f198-2b5c-4b77-be7b-7406b7287ff0-utilities" (OuterVolumeSpecName: "utilities") pod "6044f198-2b5c-4b77-be7b-7406b7287ff0" (UID: "6044f198-2b5c-4b77-be7b-7406b7287ff0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:18:40 crc kubenswrapper[4893]: I0128 15:18:40.238249 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6044f198-2b5c-4b77-be7b-7406b7287ff0-kube-api-access-kggvw" (OuterVolumeSpecName: "kube-api-access-kggvw") pod "6044f198-2b5c-4b77-be7b-7406b7287ff0" (UID: "6044f198-2b5c-4b77-be7b-7406b7287ff0"). InnerVolumeSpecName "kube-api-access-kggvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:18:40 crc kubenswrapper[4893]: I0128 15:18:40.284033 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6044f198-2b5c-4b77-be7b-7406b7287ff0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6044f198-2b5c-4b77-be7b-7406b7287ff0" (UID: "6044f198-2b5c-4b77-be7b-7406b7287ff0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:18:40 crc kubenswrapper[4893]: I0128 15:18:40.326217 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6044f198-2b5c-4b77-be7b-7406b7287ff0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:18:40 crc kubenswrapper[4893]: I0128 15:18:40.326578 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6044f198-2b5c-4b77-be7b-7406b7287ff0-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:18:40 crc kubenswrapper[4893]: I0128 15:18:40.326594 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kggvw\" (UniqueName: \"kubernetes.io/projected/6044f198-2b5c-4b77-be7b-7406b7287ff0-kube-api-access-kggvw\") on node \"crc\" DevicePath \"\"" Jan 28 15:18:41 crc kubenswrapper[4893]: I0128 15:18:41.057117 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vscg2" event={"ID":"6044f198-2b5c-4b77-be7b-7406b7287ff0","Type":"ContainerDied","Data":"bc27d1e35f1494680f3c59579045f451bcc8f5ead97b4305a2cbfb6233e779d6"} Jan 28 15:18:41 crc kubenswrapper[4893]: I0128 15:18:41.057189 4893 scope.go:117] "RemoveContainer" containerID="d18fcbdef5fd65aae55a6ba98d709e70217f8203675b95a7cd49ca1784141198" Jan 28 15:18:41 crc kubenswrapper[4893]: I0128 15:18:41.057232 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vscg2" Jan 28 15:18:41 crc kubenswrapper[4893]: I0128 15:18:41.083324 4893 scope.go:117] "RemoveContainer" containerID="91d150c323650a216cc4932986a2e36b378abed83090bb22e893beb907aceb6b" Jan 28 15:18:41 crc kubenswrapper[4893]: I0128 15:18:41.086535 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vscg2"] Jan 28 15:18:41 crc kubenswrapper[4893]: I0128 15:18:41.090525 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vscg2"] Jan 28 15:18:41 crc kubenswrapper[4893]: I0128 15:18:41.100109 4893 scope.go:117] "RemoveContainer" containerID="2659d792f7eb46f5b065fce8040a9f896140daafa0abe718745fa92df34b5982" Jan 28 15:18:41 crc kubenswrapper[4893]: I0128 15:18:41.317755 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-8dszf"] Jan 28 15:18:41 crc kubenswrapper[4893]: E0128 15:18:41.318285 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6044f198-2b5c-4b77-be7b-7406b7287ff0" containerName="extract-content" Jan 28 15:18:41 crc kubenswrapper[4893]: I0128 15:18:41.318303 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="6044f198-2b5c-4b77-be7b-7406b7287ff0" containerName="extract-content" Jan 28 15:18:41 crc kubenswrapper[4893]: E0128 15:18:41.318324 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6044f198-2b5c-4b77-be7b-7406b7287ff0" containerName="extract-utilities" Jan 28 15:18:41 crc kubenswrapper[4893]: I0128 15:18:41.318345 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="6044f198-2b5c-4b77-be7b-7406b7287ff0" containerName="extract-utilities" Jan 28 15:18:41 crc kubenswrapper[4893]: E0128 15:18:41.318381 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6044f198-2b5c-4b77-be7b-7406b7287ff0" containerName="registry-server" Jan 28 15:18:41 crc kubenswrapper[4893]: I0128 15:18:41.318392 4893 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="6044f198-2b5c-4b77-be7b-7406b7287ff0" containerName="registry-server" Jan 28 15:18:41 crc kubenswrapper[4893]: I0128 15:18:41.318624 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="6044f198-2b5c-4b77-be7b-7406b7287ff0" containerName="registry-server" Jan 28 15:18:41 crc kubenswrapper[4893]: I0128 15:18:41.322984 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8dszf" Jan 28 15:18:41 crc kubenswrapper[4893]: I0128 15:18:41.326000 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8dszf"] Jan 28 15:18:41 crc kubenswrapper[4893]: I0128 15:18:41.327791 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 28 15:18:41 crc kubenswrapper[4893]: I0128 15:18:41.327923 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 28 15:18:41 crc kubenswrapper[4893]: I0128 15:18:41.328668 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-h9bxj" Jan 28 15:18:41 crc kubenswrapper[4893]: I0128 15:18:41.342729 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqb6w\" (UniqueName: \"kubernetes.io/projected/1e49f4d1-1856-44a5-91a5-86833c5e9e0c-kube-api-access-jqb6w\") pod \"openstack-operator-index-8dszf\" (UID: \"1e49f4d1-1856-44a5-91a5-86833c5e9e0c\") " pod="openstack-operators/openstack-operator-index-8dszf" Jan 28 15:18:41 crc kubenswrapper[4893]: I0128 15:18:41.444425 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqb6w\" (UniqueName: \"kubernetes.io/projected/1e49f4d1-1856-44a5-91a5-86833c5e9e0c-kube-api-access-jqb6w\") pod \"openstack-operator-index-8dszf\" (UID: \"1e49f4d1-1856-44a5-91a5-86833c5e9e0c\") " pod="openstack-operators/openstack-operator-index-8dszf" Jan 28 15:18:41 crc kubenswrapper[4893]: I0128 15:18:41.470888 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqb6w\" (UniqueName: \"kubernetes.io/projected/1e49f4d1-1856-44a5-91a5-86833c5e9e0c-kube-api-access-jqb6w\") pod \"openstack-operator-index-8dszf\" (UID: \"1e49f4d1-1856-44a5-91a5-86833c5e9e0c\") " pod="openstack-operators/openstack-operator-index-8dszf" Jan 28 15:18:41 crc kubenswrapper[4893]: I0128 15:18:41.658598 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-8dszf" Jan 28 15:18:41 crc kubenswrapper[4893]: I0128 15:18:41.859680 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8dszf"] Jan 28 15:18:41 crc kubenswrapper[4893]: I0128 15:18:41.902664 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h56ct" Jan 28 15:18:41 crc kubenswrapper[4893]: I0128 15:18:41.905752 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h56ct" Jan 28 15:18:41 crc kubenswrapper[4893]: I0128 15:18:41.956562 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h56ct" Jan 28 15:18:42 crc kubenswrapper[4893]: I0128 15:18:42.066035 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8dszf" event={"ID":"1e49f4d1-1856-44a5-91a5-86833c5e9e0c","Type":"ContainerStarted","Data":"d7e8188551d35880c39093bf3fb15b597db92a9b3e40139fb3c4a3fdee17e3eb"} Jan 28 15:18:42 crc kubenswrapper[4893]: I0128 15:18:42.106893 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h56ct" Jan 28 15:18:42 crc kubenswrapper[4893]: I0128 15:18:42.903085 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6044f198-2b5c-4b77-be7b-7406b7287ff0" path="/var/lib/kubelet/pods/6044f198-2b5c-4b77-be7b-7406b7287ff0/volumes" Jan 28 15:18:45 crc kubenswrapper[4893]: I0128 15:18:45.087908 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8dszf" event={"ID":"1e49f4d1-1856-44a5-91a5-86833c5e9e0c","Type":"ContainerStarted","Data":"7f136b47c812ec4cf7b82952b3c46bfd3909d83f825fb6b1ca4a87f2b8ddfd72"} Jan 28 15:18:45 crc kubenswrapper[4893]: I0128 15:18:45.116771 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-8dszf" podStartSLOduration=1.826717854 podStartE2EDuration="4.116751524s" podCreationTimestamp="2026-01-28 15:18:41 +0000 UTC" firstStartedPulling="2026-01-28 15:18:41.868220584 +0000 UTC m=+1039.641835612" lastFinishedPulling="2026-01-28 15:18:44.158254254 +0000 UTC m=+1041.931869282" observedRunningTime="2026-01-28 15:18:45.11587524 +0000 UTC m=+1042.889490278" watchObservedRunningTime="2026-01-28 15:18:45.116751524 +0000 UTC m=+1042.890366552" Jan 28 15:18:46 crc kubenswrapper[4893]: I0128 15:18:46.707922 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h56ct"] Jan 28 15:18:46 crc kubenswrapper[4893]: I0128 15:18:46.708679 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h56ct" podUID="b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3" containerName="registry-server" containerID="cri-o://ee3af57662a5d4cd25a814d98c829a21436d5aa193c36317f08de99aa34270cf" gracePeriod=2 Jan 28 15:18:47 crc kubenswrapper[4893]: I0128 15:18:47.116836 4893 generic.go:334] "Generic (PLEG): container finished" podID="b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3" containerID="ee3af57662a5d4cd25a814d98c829a21436d5aa193c36317f08de99aa34270cf" exitCode=0 Jan 28 15:18:47 crc kubenswrapper[4893]: I0128 15:18:47.116887 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h56ct" 
event={"ID":"b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3","Type":"ContainerDied","Data":"ee3af57662a5d4cd25a814d98c829a21436d5aa193c36317f08de99aa34270cf"} Jan 28 15:18:47 crc kubenswrapper[4893]: I0128 15:18:47.158993 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h56ct" Jan 28 15:18:47 crc kubenswrapper[4893]: I0128 15:18:47.338910 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3-catalog-content\") pod \"b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3\" (UID: \"b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3\") " Jan 28 15:18:47 crc kubenswrapper[4893]: I0128 15:18:47.339075 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3-utilities\") pod \"b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3\" (UID: \"b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3\") " Jan 28 15:18:47 crc kubenswrapper[4893]: I0128 15:18:47.339168 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcsbk\" (UniqueName: \"kubernetes.io/projected/b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3-kube-api-access-pcsbk\") pod \"b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3\" (UID: \"b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3\") " Jan 28 15:18:47 crc kubenswrapper[4893]: I0128 15:18:47.340098 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3-utilities" (OuterVolumeSpecName: "utilities") pod "b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3" (UID: "b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:18:47 crc kubenswrapper[4893]: I0128 15:18:47.347698 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3-kube-api-access-pcsbk" (OuterVolumeSpecName: "kube-api-access-pcsbk") pod "b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3" (UID: "b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3"). InnerVolumeSpecName "kube-api-access-pcsbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:18:47 crc kubenswrapper[4893]: I0128 15:18:47.389808 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3" (UID: "b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:18:47 crc kubenswrapper[4893]: I0128 15:18:47.440750 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:18:47 crc kubenswrapper[4893]: I0128 15:18:47.440783 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcsbk\" (UniqueName: \"kubernetes.io/projected/b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3-kube-api-access-pcsbk\") on node \"crc\" DevicePath \"\"" Jan 28 15:18:47 crc kubenswrapper[4893]: I0128 15:18:47.440794 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:18:48 crc kubenswrapper[4893]: I0128 15:18:48.127054 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h56ct" event={"ID":"b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3","Type":"ContainerDied","Data":"35830da30a2c682e0cbe9d381d3d4a4823e0b33caa5187ab113e78229cf0342a"} Jan 28 15:18:48 crc kubenswrapper[4893]: I0128 15:18:48.127102 4893 scope.go:117] "RemoveContainer" containerID="ee3af57662a5d4cd25a814d98c829a21436d5aa193c36317f08de99aa34270cf" Jan 28 15:18:48 crc kubenswrapper[4893]: I0128 15:18:48.127174 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h56ct" Jan 28 15:18:48 crc kubenswrapper[4893]: I0128 15:18:48.144584 4893 scope.go:117] "RemoveContainer" containerID="71ed5b4212abd82da1fa56149c7f9391685eee40dd42c1372818469d6389b1d4" Jan 28 15:18:48 crc kubenswrapper[4893]: I0128 15:18:48.157449 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h56ct"] Jan 28 15:18:48 crc kubenswrapper[4893]: I0128 15:18:48.163020 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h56ct"] Jan 28 15:18:48 crc kubenswrapper[4893]: I0128 15:18:48.176851 4893 scope.go:117] "RemoveContainer" containerID="978596152dd05163d193bba21936f76f548668410b6c573a0d11c2e525f79836" Jan 28 15:18:48 crc kubenswrapper[4893]: I0128 15:18:48.901248 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3" path="/var/lib/kubelet/pods/b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3/volumes" Jan 28 15:18:51 crc kubenswrapper[4893]: I0128 15:18:51.659107 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-8dszf" Jan 28 15:18:51 crc kubenswrapper[4893]: I0128 15:18:51.659463 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-8dszf" Jan 28 15:18:51 crc kubenswrapper[4893]: I0128 15:18:51.683802 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-8dszf" Jan 28 15:18:52 crc kubenswrapper[4893]: I0128 15:18:52.183435 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-8dszf" Jan 28 15:18:58 crc kubenswrapper[4893]: I0128 15:18:58.959664 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg"] Jan 28 15:18:58 crc kubenswrapper[4893]: E0128 
Jan 28 15:18:58 crc kubenswrapper[4893]: I0128 15:18:58.960332 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3" containerName="extract-content"
Jan 28 15:18:58 crc kubenswrapper[4893]: E0128 15:18:58.960345 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3" containerName="registry-server"
Jan 28 15:18:58 crc kubenswrapper[4893]: I0128 15:18:58.960352 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3" containerName="registry-server"
Jan 28 15:18:58 crc kubenswrapper[4893]: E0128 15:18:58.960366 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3" containerName="extract-utilities"
Jan 28 15:18:58 crc kubenswrapper[4893]: I0128 15:18:58.960375 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3" containerName="extract-utilities"
Jan 28 15:18:58 crc kubenswrapper[4893]: I0128 15:18:58.960535 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3c377a7-96a7-4168-9efd-8ed5c8c5a3d3" containerName="registry-server"
Jan 28 15:18:58 crc kubenswrapper[4893]: I0128 15:18:58.961440 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg"
Jan 28 15:18:58 crc kubenswrapper[4893]: I0128 15:18:58.963644 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-srhb4"
Jan 28 15:18:58 crc kubenswrapper[4893]: I0128 15:18:58.970699 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg"]
Jan 28 15:18:59 crc kubenswrapper[4893]: I0128 15:18:59.120085 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8cb55e6c-bd6a-496e-a2bd-85b72cfb8146-util\") pod \"1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg\" (UID: \"8cb55e6c-bd6a-496e-a2bd-85b72cfb8146\") " pod="openstack-operators/1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg"
Jan 28 15:18:59 crc kubenswrapper[4893]: I0128 15:18:59.120174 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpzwm\" (UniqueName: \"kubernetes.io/projected/8cb55e6c-bd6a-496e-a2bd-85b72cfb8146-kube-api-access-bpzwm\") pod \"1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg\" (UID: \"8cb55e6c-bd6a-496e-a2bd-85b72cfb8146\") " pod="openstack-operators/1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg"
Jan 28 15:18:59 crc kubenswrapper[4893]: I0128 15:18:59.120427 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8cb55e6c-bd6a-496e-a2bd-85b72cfb8146-bundle\") pod \"1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg\" (UID: \"8cb55e6c-bd6a-496e-a2bd-85b72cfb8146\") " pod="openstack-operators/1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg"
Jan 28 15:18:59 crc kubenswrapper[4893]: I0128 15:18:59.222260 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8cb55e6c-bd6a-496e-a2bd-85b72cfb8146-util\") pod \"1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg\" (UID: \"8cb55e6c-bd6a-496e-a2bd-85b72cfb8146\") " pod="openstack-operators/1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg"
Jan 28 15:18:59 crc kubenswrapper[4893]: I0128 15:18:59.222359 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpzwm\" (UniqueName: \"kubernetes.io/projected/8cb55e6c-bd6a-496e-a2bd-85b72cfb8146-kube-api-access-bpzwm\") pod \"1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg\" (UID: \"8cb55e6c-bd6a-496e-a2bd-85b72cfb8146\") " pod="openstack-operators/1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg"
Jan 28 15:18:59 crc kubenswrapper[4893]: I0128 15:18:59.222436 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8cb55e6c-bd6a-496e-a2bd-85b72cfb8146-bundle\") pod \"1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg\" (UID: \"8cb55e6c-bd6a-496e-a2bd-85b72cfb8146\") " pod="openstack-operators/1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg"
Jan 28 15:18:59 crc kubenswrapper[4893]: I0128 15:18:59.223129 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8cb55e6c-bd6a-496e-a2bd-85b72cfb8146-bundle\") pod \"1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg\" (UID: \"8cb55e6c-bd6a-496e-a2bd-85b72cfb8146\") " pod="openstack-operators/1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg"
Jan 28 15:18:59 crc kubenswrapper[4893]: I0128 15:18:59.223294 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8cb55e6c-bd6a-496e-a2bd-85b72cfb8146-util\") pod \"1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg\" (UID: \"8cb55e6c-bd6a-496e-a2bd-85b72cfb8146\") " pod="openstack-operators/1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg"
Jan 28 15:18:59 crc kubenswrapper[4893]: I0128 15:18:59.242621 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpzwm\" (UniqueName: \"kubernetes.io/projected/8cb55e6c-bd6a-496e-a2bd-85b72cfb8146-kube-api-access-bpzwm\") pod \"1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg\" (UID: \"8cb55e6c-bd6a-496e-a2bd-85b72cfb8146\") " pod="openstack-operators/1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg"
Jan 28 15:18:59 crc kubenswrapper[4893]: I0128 15:18:59.324263 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg"
Need to start a new one" pod="openstack-operators/1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg" Jan 28 15:18:59 crc kubenswrapper[4893]: I0128 15:18:59.765748 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg"] Jan 28 15:19:00 crc kubenswrapper[4893]: I0128 15:19:00.210158 4893 generic.go:334] "Generic (PLEG): container finished" podID="8cb55e6c-bd6a-496e-a2bd-85b72cfb8146" containerID="e9ccbefa8b3d76a82993ad800d71e7eaea3d1c512ed6618d5f0fe2d6f8a06221" exitCode=0 Jan 28 15:19:00 crc kubenswrapper[4893]: I0128 15:19:00.210408 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg" event={"ID":"8cb55e6c-bd6a-496e-a2bd-85b72cfb8146","Type":"ContainerDied","Data":"e9ccbefa8b3d76a82993ad800d71e7eaea3d1c512ed6618d5f0fe2d6f8a06221"} Jan 28 15:19:00 crc kubenswrapper[4893]: I0128 15:19:00.210779 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg" event={"ID":"8cb55e6c-bd6a-496e-a2bd-85b72cfb8146","Type":"ContainerStarted","Data":"0971253bb1be964c84344492999a6a1e072d0edabf2cec1bd56c135603e09666"} Jan 28 15:19:01 crc kubenswrapper[4893]: I0128 15:19:01.221678 4893 generic.go:334] "Generic (PLEG): container finished" podID="8cb55e6c-bd6a-496e-a2bd-85b72cfb8146" containerID="00132eb29c28527ade71bd84328fb8fd8c8234ca6c305d095e39d1ca52308598" exitCode=0 Jan 28 15:19:01 crc kubenswrapper[4893]: I0128 15:19:01.221736 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg" event={"ID":"8cb55e6c-bd6a-496e-a2bd-85b72cfb8146","Type":"ContainerDied","Data":"00132eb29c28527ade71bd84328fb8fd8c8234ca6c305d095e39d1ca52308598"} Jan 28 15:19:02 crc kubenswrapper[4893]: I0128 15:19:02.232847 4893 generic.go:334] "Generic (PLEG): container finished" podID="8cb55e6c-bd6a-496e-a2bd-85b72cfb8146" containerID="2ecebd71c3de99516db7c70ce5eec833229f1ecc80e3c04132ed3f45c3367da2" exitCode=0 Jan 28 15:19:02 crc kubenswrapper[4893]: I0128 15:19:02.232914 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg" event={"ID":"8cb55e6c-bd6a-496e-a2bd-85b72cfb8146","Type":"ContainerDied","Data":"2ecebd71c3de99516db7c70ce5eec833229f1ecc80e3c04132ed3f45c3367da2"} Jan 28 15:19:03 crc kubenswrapper[4893]: I0128 15:19:03.566401 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg" Jan 28 15:19:03 crc kubenswrapper[4893]: I0128 15:19:03.686012 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8cb55e6c-bd6a-496e-a2bd-85b72cfb8146-util\") pod \"8cb55e6c-bd6a-496e-a2bd-85b72cfb8146\" (UID: \"8cb55e6c-bd6a-496e-a2bd-85b72cfb8146\") " Jan 28 15:19:03 crc kubenswrapper[4893]: I0128 15:19:03.686075 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8cb55e6c-bd6a-496e-a2bd-85b72cfb8146-bundle\") pod \"8cb55e6c-bd6a-496e-a2bd-85b72cfb8146\" (UID: \"8cb55e6c-bd6a-496e-a2bd-85b72cfb8146\") " Jan 28 15:19:03 crc kubenswrapper[4893]: I0128 15:19:03.686125 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpzwm\" (UniqueName: \"kubernetes.io/projected/8cb55e6c-bd6a-496e-a2bd-85b72cfb8146-kube-api-access-bpzwm\") pod \"8cb55e6c-bd6a-496e-a2bd-85b72cfb8146\" (UID: \"8cb55e6c-bd6a-496e-a2bd-85b72cfb8146\") " Jan 28 15:19:03 crc kubenswrapper[4893]: I0128 15:19:03.686976 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cb55e6c-bd6a-496e-a2bd-85b72cfb8146-bundle" (OuterVolumeSpecName: "bundle") pod "8cb55e6c-bd6a-496e-a2bd-85b72cfb8146" (UID: "8cb55e6c-bd6a-496e-a2bd-85b72cfb8146"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:19:03 crc kubenswrapper[4893]: I0128 15:19:03.692142 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cb55e6c-bd6a-496e-a2bd-85b72cfb8146-kube-api-access-bpzwm" (OuterVolumeSpecName: "kube-api-access-bpzwm") pod "8cb55e6c-bd6a-496e-a2bd-85b72cfb8146" (UID: "8cb55e6c-bd6a-496e-a2bd-85b72cfb8146"). InnerVolumeSpecName "kube-api-access-bpzwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:19:03 crc kubenswrapper[4893]: I0128 15:19:03.700861 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cb55e6c-bd6a-496e-a2bd-85b72cfb8146-util" (OuterVolumeSpecName: "util") pod "8cb55e6c-bd6a-496e-a2bd-85b72cfb8146" (UID: "8cb55e6c-bd6a-496e-a2bd-85b72cfb8146"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:19:03 crc kubenswrapper[4893]: I0128 15:19:03.787132 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpzwm\" (UniqueName: \"kubernetes.io/projected/8cb55e6c-bd6a-496e-a2bd-85b72cfb8146-kube-api-access-bpzwm\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:03 crc kubenswrapper[4893]: I0128 15:19:03.787174 4893 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8cb55e6c-bd6a-496e-a2bd-85b72cfb8146-util\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:03 crc kubenswrapper[4893]: I0128 15:19:03.787189 4893 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8cb55e6c-bd6a-496e-a2bd-85b72cfb8146-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:19:04 crc kubenswrapper[4893]: I0128 15:19:04.260347 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg" event={"ID":"8cb55e6c-bd6a-496e-a2bd-85b72cfb8146","Type":"ContainerDied","Data":"0971253bb1be964c84344492999a6a1e072d0edabf2cec1bd56c135603e09666"} Jan 28 15:19:04 crc kubenswrapper[4893]: I0128 15:19:04.260634 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0971253bb1be964c84344492999a6a1e072d0edabf2cec1bd56c135603e09666" Jan 28 15:19:04 crc kubenswrapper[4893]: I0128 15:19:04.260778 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg" Jan 28 15:19:05 crc kubenswrapper[4893]: I0128 15:19:05.722923 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:19:05 crc kubenswrapper[4893]: I0128 15:19:05.723334 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:19:05 crc kubenswrapper[4893]: I0128 15:19:05.723389 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" Jan 28 15:19:05 crc kubenswrapper[4893]: I0128 15:19:05.724125 4893 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e5fb5a1f3773928c39eda437a9e56f4ecca599067083a7fd3baff85989507ed7"} pod="openshift-machine-config-operator/machine-config-daemon-l2nht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:19:05 crc kubenswrapper[4893]: I0128 15:19:05.724198 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" containerID="cri-o://e5fb5a1f3773928c39eda437a9e56f4ecca599067083a7fd3baff85989507ed7" gracePeriod=600 Jan 28 15:19:06 crc kubenswrapper[4893]: I0128 15:19:06.287288 4893 generic.go:334] "Generic (PLEG): container finished" 
podID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerID="e5fb5a1f3773928c39eda437a9e56f4ecca599067083a7fd3baff85989507ed7" exitCode=0 Jan 28 15:19:06 crc kubenswrapper[4893]: I0128 15:19:06.287383 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" event={"ID":"b2ddd967-f9a8-464a-95de-512c9c5874fd","Type":"ContainerDied","Data":"e5fb5a1f3773928c39eda437a9e56f4ecca599067083a7fd3baff85989507ed7"} Jan 28 15:19:06 crc kubenswrapper[4893]: I0128 15:19:06.287677 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" event={"ID":"b2ddd967-f9a8-464a-95de-512c9c5874fd","Type":"ContainerStarted","Data":"eaa47c5c31906ab74e7bc044988a1088092bc8e70af984b1414760728f1c9f6e"} Jan 28 15:19:06 crc kubenswrapper[4893]: I0128 15:19:06.287704 4893 scope.go:117] "RemoveContainer" containerID="3ced3cae3c54b613ab0ce9fe21bab2e1babeff0d0bb895261d140f95238422f3" Jan 28 15:19:08 crc kubenswrapper[4893]: I0128 15:19:08.787855 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6cdf9dd67-8gqfg"] Jan 28 15:19:08 crc kubenswrapper[4893]: E0128 15:19:08.788537 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cb55e6c-bd6a-496e-a2bd-85b72cfb8146" containerName="pull" Jan 28 15:19:08 crc kubenswrapper[4893]: I0128 15:19:08.788555 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb55e6c-bd6a-496e-a2bd-85b72cfb8146" containerName="pull" Jan 28 15:19:08 crc kubenswrapper[4893]: E0128 15:19:08.788569 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cb55e6c-bd6a-496e-a2bd-85b72cfb8146" containerName="extract" Jan 28 15:19:08 crc kubenswrapper[4893]: I0128 15:19:08.788812 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb55e6c-bd6a-496e-a2bd-85b72cfb8146" containerName="extract" Jan 28 15:19:08 crc kubenswrapper[4893]: E0128 15:19:08.788846 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cb55e6c-bd6a-496e-a2bd-85b72cfb8146" containerName="util" Jan 28 15:19:08 crc kubenswrapper[4893]: I0128 15:19:08.788854 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb55e6c-bd6a-496e-a2bd-85b72cfb8146" containerName="util" Jan 28 15:19:08 crc kubenswrapper[4893]: I0128 15:19:08.789004 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cb55e6c-bd6a-496e-a2bd-85b72cfb8146" containerName="extract" Jan 28 15:19:08 crc kubenswrapper[4893]: I0128 15:19:08.789515 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6cdf9dd67-8gqfg" Jan 28 15:19:08 crc kubenswrapper[4893]: I0128 15:19:08.792703 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-k45f8" Jan 28 15:19:08 crc kubenswrapper[4893]: I0128 15:19:08.820367 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6cdf9dd67-8gqfg"] Jan 28 15:19:08 crc kubenswrapper[4893]: I0128 15:19:08.860724 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mscz9\" (UniqueName: \"kubernetes.io/projected/2da9b6a3-e6da-4a7d-8bf2-98d9e50ea1e1-kube-api-access-mscz9\") pod \"openstack-operator-controller-init-6cdf9dd67-8gqfg\" (UID: \"2da9b6a3-e6da-4a7d-8bf2-98d9e50ea1e1\") " pod="openstack-operators/openstack-operator-controller-init-6cdf9dd67-8gqfg" Jan 28 15:19:08 crc kubenswrapper[4893]: I0128 15:19:08.962234 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mscz9\" (UniqueName: \"kubernetes.io/projected/2da9b6a3-e6da-4a7d-8bf2-98d9e50ea1e1-kube-api-access-mscz9\") pod \"openstack-operator-controller-init-6cdf9dd67-8gqfg\" (UID: \"2da9b6a3-e6da-4a7d-8bf2-98d9e50ea1e1\") " pod="openstack-operators/openstack-operator-controller-init-6cdf9dd67-8gqfg" Jan 28 15:19:08 crc kubenswrapper[4893]: I0128 15:19:08.991441 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mscz9\" (UniqueName: \"kubernetes.io/projected/2da9b6a3-e6da-4a7d-8bf2-98d9e50ea1e1-kube-api-access-mscz9\") pod \"openstack-operator-controller-init-6cdf9dd67-8gqfg\" (UID: \"2da9b6a3-e6da-4a7d-8bf2-98d9e50ea1e1\") " pod="openstack-operators/openstack-operator-controller-init-6cdf9dd67-8gqfg" Jan 28 15:19:09 crc kubenswrapper[4893]: I0128 15:19:09.105523 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6cdf9dd67-8gqfg" Jan 28 15:19:09 crc kubenswrapper[4893]: I0128 15:19:09.552729 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6cdf9dd67-8gqfg"] Jan 28 15:19:10 crc kubenswrapper[4893]: I0128 15:19:10.318423 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6cdf9dd67-8gqfg" event={"ID":"2da9b6a3-e6da-4a7d-8bf2-98d9e50ea1e1","Type":"ContainerStarted","Data":"b3953e60d7bba752666f09fe03db4c2956e1315163b9f38c7b1d87159bf8c68a"} Jan 28 15:19:14 crc kubenswrapper[4893]: I0128 15:19:14.350883 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6cdf9dd67-8gqfg" event={"ID":"2da9b6a3-e6da-4a7d-8bf2-98d9e50ea1e1","Type":"ContainerStarted","Data":"3003fd0eea2feb9d34df73fe20ac1d0c577d38d6a4ccff8e130363ca9ca27033"} Jan 28 15:19:14 crc kubenswrapper[4893]: I0128 15:19:14.351453 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6cdf9dd67-8gqfg" Jan 28 15:19:14 crc kubenswrapper[4893]: I0128 15:19:14.377935 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6cdf9dd67-8gqfg" podStartSLOduration=2.190488538 podStartE2EDuration="6.377914831s" podCreationTimestamp="2026-01-28 15:19:08 +0000 UTC" firstStartedPulling="2026-01-28 15:19:09.559696859 +0000 UTC m=+1067.333311887" lastFinishedPulling="2026-01-28 15:19:13.747123152 +0000 UTC m=+1071.520738180" observedRunningTime="2026-01-28 15:19:14.375939148 +0000 UTC m=+1072.149554186" watchObservedRunningTime="2026-01-28 15:19:14.377914831 +0000 UTC m=+1072.151529859" Jan 28 15:19:19 crc kubenswrapper[4893]: I0128 15:19:19.110019 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6cdf9dd67-8gqfg" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.327942 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-p6nxj"] Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.329083 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-p6nxj" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.331458 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-9h8sj" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.343959 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vdcjn"] Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.344860 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vdcjn" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.346728 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-nsspq" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.355032 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-p6nxj"] Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.368335 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vdcjn"] Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.389575 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-jnhg7"] Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.390732 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-jnhg7" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.407968 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-7f789" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.423852 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8cp8\" (UniqueName: \"kubernetes.io/projected/72d2e324-70de-4019-9673-0a86620ca028-kube-api-access-g8cp8\") pod \"cinder-operator-controller-manager-7478f7dbf9-vdcjn\" (UID: \"72d2e324-70de-4019-9673-0a86620ca028\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vdcjn" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.423973 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzvt9\" (UniqueName: \"kubernetes.io/projected/c2188ba2-ad62-4873-abfe-fa7ad88b57a6-kube-api-access-hzvt9\") pod \"barbican-operator-controller-manager-7f86f8796f-p6nxj\" (UID: \"c2188ba2-ad62-4873-abfe-fa7ad88b57a6\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-p6nxj" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.424214 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr6rx\" (UniqueName: \"kubernetes.io/projected/17019a37-b628-4464-b037-470c2be80308-kube-api-access-dr6rx\") pod \"designate-operator-controller-manager-b45d7bf98-jnhg7\" (UID: \"17019a37-b628-4464-b037-470c2be80308\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-jnhg7" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.449443 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-dlrsm"] Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.460804 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-dlrsm" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.465247 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-52fs8" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.469348 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-jnhg7"] Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.499243 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-dlrsm"] Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.500814 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-j8x44"] Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.503912 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j8x44" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.509952 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-jfmdx" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.518691 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-dqldg"] Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.519570 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-dqldg" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.522965 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-4ztdj" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.525873 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8cp8\" (UniqueName: \"kubernetes.io/projected/72d2e324-70de-4019-9673-0a86620ca028-kube-api-access-g8cp8\") pod \"cinder-operator-controller-manager-7478f7dbf9-vdcjn\" (UID: \"72d2e324-70de-4019-9673-0a86620ca028\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vdcjn" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.525949 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzvt9\" (UniqueName: \"kubernetes.io/projected/c2188ba2-ad62-4873-abfe-fa7ad88b57a6-kube-api-access-hzvt9\") pod \"barbican-operator-controller-manager-7f86f8796f-p6nxj\" (UID: \"c2188ba2-ad62-4873-abfe-fa7ad88b57a6\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-p6nxj" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.525996 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr6rx\" (UniqueName: \"kubernetes.io/projected/17019a37-b628-4464-b037-470c2be80308-kube-api-access-dr6rx\") pod \"designate-operator-controller-manager-b45d7bf98-jnhg7\" (UID: \"17019a37-b628-4464-b037-470c2be80308\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-jnhg7" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.533668 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-j8x44"] Jan 28 15:19:39 crc kubenswrapper[4893]: 
Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.539409 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-rg997"]
Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.540562 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rg997"
Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.543327 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.546872 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-dqldg"]
Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.556307 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-nldp8"
Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.557447 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-jfx6g"]
Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.558336 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jfx6g"
Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.560819 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-8wgwf"
Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.588643 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-jfx6g"]
Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.598728 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-rg997"]
Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.606305 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzvt9\" (UniqueName: \"kubernetes.io/projected/c2188ba2-ad62-4873-abfe-fa7ad88b57a6-kube-api-access-hzvt9\") pod \"barbican-operator-controller-manager-7f86f8796f-p6nxj\" (UID: \"c2188ba2-ad62-4873-abfe-fa7ad88b57a6\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-p6nxj"
Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.608501 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr6rx\" (UniqueName: \"kubernetes.io/projected/17019a37-b628-4464-b037-470c2be80308-kube-api-access-dr6rx\") pod \"designate-operator-controller-manager-b45d7bf98-jnhg7\" (UID: \"17019a37-b628-4464-b037-470c2be80308\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-jnhg7"
Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.608998 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8cp8\" (UniqueName: \"kubernetes.io/projected/72d2e324-70de-4019-9673-0a86620ca028-kube-api-access-g8cp8\") pod \"cinder-operator-controller-manager-7478f7dbf9-vdcjn\" (UID: \"72d2e324-70de-4019-9673-0a86620ca028\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vdcjn"
Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.611590 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-4rgm2"]
Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.612497 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4rgm2"
Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.619856 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-trmxz"
Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.626858 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j4vk\" (UniqueName: \"kubernetes.io/projected/0e525c35-621a-43f8-a8c6-9a472607373d-kube-api-access-9j4vk\") pod \"heat-operator-controller-manager-594c8c9d5d-j8x44\" (UID: \"0e525c35-621a-43f8-a8c6-9a472607373d\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j8x44"
Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.627221 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgq2r\" (UniqueName: \"kubernetes.io/projected/4179ac2f-dd41-4cd3-8558-6daba8252582-kube-api-access-tgq2r\") pod \"glance-operator-controller-manager-78fdd796fd-dlrsm\" (UID: \"4179ac2f-dd41-4cd3-8558-6daba8252582\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-dlrsm"
Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.627290 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcrbh\" (UniqueName: \"kubernetes.io/projected/0dcd4cb9-92c5-4fb0-9718-79fe6b7d2cea-kube-api-access-zcrbh\") pod \"horizon-operator-controller-manager-77d5c5b54f-dqldg\" (UID: \"0dcd4cb9-92c5-4fb0-9718-79fe6b7d2cea\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-dqldg"
Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.636733 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-nd8rm"]
Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.638060 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-nd8rm"
Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.646451 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-9c9cq"
Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.651511 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-p6nxj"
Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.663964 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-4rgm2"]
Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.664374 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vdcjn"
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vdcjn" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.683238 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-nd8rm"] Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.686527 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-qbfns"] Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.688454 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-qbfns" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.696941 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-fpjnl" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.715198 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-qbfns"] Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.723783 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-jnhg7" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.733191 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1a360ec7-efa3-4972-a655-3e21de960aec-cert\") pod \"infra-operator-controller-manager-694cf4f878-rg997\" (UID: \"1a360ec7-efa3-4972-a655-3e21de960aec\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rg997" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.733271 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcrbh\" (UniqueName: \"kubernetes.io/projected/0dcd4cb9-92c5-4fb0-9718-79fe6b7d2cea-kube-api-access-zcrbh\") pod \"horizon-operator-controller-manager-77d5c5b54f-dqldg\" (UID: \"0dcd4cb9-92c5-4fb0-9718-79fe6b7d2cea\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-dqldg" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.733313 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j4vk\" (UniqueName: \"kubernetes.io/projected/0e525c35-621a-43f8-a8c6-9a472607373d-kube-api-access-9j4vk\") pod \"heat-operator-controller-manager-594c8c9d5d-j8x44\" (UID: \"0e525c35-621a-43f8-a8c6-9a472607373d\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j8x44" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.733377 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvlx6\" (UniqueName: \"kubernetes.io/projected/20c9ab96-9196-4834-b516-8d1c9564bf35-kube-api-access-tvlx6\") pod \"ironic-operator-controller-manager-598f7747c9-jfx6g\" (UID: \"20c9ab96-9196-4834-b516-8d1c9564bf35\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jfx6g" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.733410 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgq2r\" (UniqueName: \"kubernetes.io/projected/4179ac2f-dd41-4cd3-8558-6daba8252582-kube-api-access-tgq2r\") pod \"glance-operator-controller-manager-78fdd796fd-dlrsm\" (UID: \"4179ac2f-dd41-4cd3-8558-6daba8252582\") " 
pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-dlrsm" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.733446 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzzr9\" (UniqueName: \"kubernetes.io/projected/1a360ec7-efa3-4972-a655-3e21de960aec-kube-api-access-pzzr9\") pod \"infra-operator-controller-manager-694cf4f878-rg997\" (UID: \"1a360ec7-efa3-4972-a655-3e21de960aec\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rg997" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.733557 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcdrg\" (UniqueName: \"kubernetes.io/projected/7740f64d-b660-493b-b3f5-1041a0ce3061-kube-api-access-fcdrg\") pod \"keystone-operator-controller-manager-b8b6d4659-4rgm2\" (UID: \"7740f64d-b660-493b-b3f5-1041a0ce3061\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4rgm2" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.743720 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-2qgj6"] Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.745021 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-2qgj6" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.750909 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-s9t4w" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.759415 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-2qgj6"] Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.811337 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgq2r\" (UniqueName: \"kubernetes.io/projected/4179ac2f-dd41-4cd3-8558-6daba8252582-kube-api-access-tgq2r\") pod \"glance-operator-controller-manager-78fdd796fd-dlrsm\" (UID: \"4179ac2f-dd41-4cd3-8558-6daba8252582\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-dlrsm" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.813386 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j4vk\" (UniqueName: \"kubernetes.io/projected/0e525c35-621a-43f8-a8c6-9a472607373d-kube-api-access-9j4vk\") pod \"heat-operator-controller-manager-594c8c9d5d-j8x44\" (UID: \"0e525c35-621a-43f8-a8c6-9a472607373d\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j8x44" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.832062 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcrbh\" (UniqueName: \"kubernetes.io/projected/0dcd4cb9-92c5-4fb0-9718-79fe6b7d2cea-kube-api-access-zcrbh\") pod \"horizon-operator-controller-manager-77d5c5b54f-dqldg\" (UID: \"0dcd4cb9-92c5-4fb0-9718-79fe6b7d2cea\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-dqldg" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.847904 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j8x44" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.848247 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvlx6\" (UniqueName: \"kubernetes.io/projected/20c9ab96-9196-4834-b516-8d1c9564bf35-kube-api-access-tvlx6\") pod \"ironic-operator-controller-manager-598f7747c9-jfx6g\" (UID: \"20c9ab96-9196-4834-b516-8d1c9564bf35\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jfx6g" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.848313 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzzr9\" (UniqueName: \"kubernetes.io/projected/1a360ec7-efa3-4972-a655-3e21de960aec-kube-api-access-pzzr9\") pod \"infra-operator-controller-manager-694cf4f878-rg997\" (UID: \"1a360ec7-efa3-4972-a655-3e21de960aec\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rg997" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.848343 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84s5q\" (UniqueName: \"kubernetes.io/projected/d578cfaa-0b09-476e-9cd0-abd3d6274bd7-kube-api-access-84s5q\") pod \"manila-operator-controller-manager-78c6999f6f-nd8rm\" (UID: \"d578cfaa-0b09-476e-9cd0-abd3d6274bd7\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-nd8rm" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.848392 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcdrg\" (UniqueName: \"kubernetes.io/projected/7740f64d-b660-493b-b3f5-1041a0ce3061-kube-api-access-fcdrg\") pod \"keystone-operator-controller-manager-b8b6d4659-4rgm2\" (UID: \"7740f64d-b660-493b-b3f5-1041a0ce3061\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4rgm2" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.848425 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7lkz\" (UniqueName: \"kubernetes.io/projected/a5872ed3-9a06-4bd2-b592-b42c548a1db4-kube-api-access-g7lkz\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-qbfns\" (UID: \"a5872ed3-9a06-4bd2-b592-b42c548a1db4\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-qbfns" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.848458 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1a360ec7-efa3-4972-a655-3e21de960aec-cert\") pod \"infra-operator-controller-manager-694cf4f878-rg997\" (UID: \"1a360ec7-efa3-4972-a655-3e21de960aec\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rg997" Jan 28 15:19:39 crc kubenswrapper[4893]: E0128 15:19:39.848637 4893 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 15:19:39 crc kubenswrapper[4893]: E0128 15:19:39.852173 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a360ec7-efa3-4972-a655-3e21de960aec-cert podName:1a360ec7-efa3-4972-a655-3e21de960aec nodeName:}" failed. No retries permitted until 2026-01-28 15:19:40.349311886 +0000 UTC m=+1098.122926914 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1a360ec7-efa3-4972-a655-3e21de960aec-cert") pod "infra-operator-controller-manager-694cf4f878-rg997" (UID: "1a360ec7-efa3-4972-a655-3e21de960aec") : secret "infra-operator-webhook-server-cert" not found Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.872729 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzzr9\" (UniqueName: \"kubernetes.io/projected/1a360ec7-efa3-4972-a655-3e21de960aec-kube-api-access-pzzr9\") pod \"infra-operator-controller-manager-694cf4f878-rg997\" (UID: \"1a360ec7-efa3-4972-a655-3e21de960aec\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rg997" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.872801 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-75d84bc6b9-s5v4q"] Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.873820 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-75d84bc6b9-s5v4q" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.875169 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvlx6\" (UniqueName: \"kubernetes.io/projected/20c9ab96-9196-4834-b516-8d1c9564bf35-kube-api-access-tvlx6\") pod \"ironic-operator-controller-manager-598f7747c9-jfx6g\" (UID: \"20c9ab96-9196-4834-b516-8d1c9564bf35\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jfx6g" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.877713 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-hzg2w" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.877868 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-b6cft"] Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.878971 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-b6cft" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.880639 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-krv6t" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.894095 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcdrg\" (UniqueName: \"kubernetes.io/projected/7740f64d-b660-493b-b3f5-1041a0ce3061-kube-api-access-fcdrg\") pod \"keystone-operator-controller-manager-b8b6d4659-4rgm2\" (UID: \"7740f64d-b660-493b-b3f5-1041a0ce3061\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4rgm2" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.894170 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-75d84bc6b9-s5v4q"] Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.911685 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-b6cft"] Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.935348 4893 util.go:30] "No sandbox for pod can be found. 
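The entries above trace the kubelet's two-phase volume handling: reconciler_common.go:245 logs VerifyControllerAttachedVolume, reconciler_common.go:218 logs MountVolume, and operation_generator.go:637 reports each SetUp success. Every projected service-account token volume (kube-api-access-*) mounts cleanly; only the secret-backed "cert" volume fails, because the Secret infra-operator-webhook-server-cert does not exist in openstack-operators yet. Webhook serving certificates like this are typically published by cert-manager or by the operator itself once it is running, so the failure is normally transient. Below is a minimal client-go sketch (a hypothetical diagnostic, not kubelet code) that performs the same lookup the secret volume plugin is failing at secret.go:188; the namespace and Secret name are taken verbatim from the log.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// The same Get the kubelet's secret volume plugin performs at mount time.
	_, err = cs.CoreV1().Secrets("openstack-operators").Get(
		context.TODO(), "infra-operator-webhook-server-cert", metav1.GetOptions{})
	if err != nil {
		fmt.Println("still missing:", err) // mirrors: secret "..." not found
	} else {
		fmt.Println("secret exists; the cert mount should succeed on the next retry")
	}
}
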
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-dqldg" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.943926 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt"] Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.945322 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.945458 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt"] Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.952552 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.970177 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84s5q\" (UniqueName: \"kubernetes.io/projected/d578cfaa-0b09-476e-9cd0-abd3d6274bd7-kube-api-access-84s5q\") pod \"manila-operator-controller-manager-78c6999f6f-nd8rm\" (UID: \"d578cfaa-0b09-476e-9cd0-abd3d6274bd7\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-nd8rm" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.976782 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-dd4r7" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.978031 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88dh7\" (UniqueName: \"kubernetes.io/projected/e1e458d4-37a1-4111-9e2d-fa49cbdd9e08-kube-api-access-88dh7\") pod \"neutron-operator-controller-manager-78d58447c5-2qgj6\" (UID: \"e1e458d4-37a1-4111-9e2d-fa49cbdd9e08\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-2qgj6" Jan 28 15:19:39 crc kubenswrapper[4893]: I0128 15:19:39.978164 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7lkz\" (UniqueName: \"kubernetes.io/projected/a5872ed3-9a06-4bd2-b592-b42c548a1db4-kube-api-access-g7lkz\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-qbfns\" (UID: \"a5872ed3-9a06-4bd2-b592-b42c548a1db4\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-qbfns" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:39.999698 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-ld4p5"] Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.002643 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ld4p5" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.004674 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jfx6g" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.005727 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-v2xf7" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.008979 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4rgm2" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.024893 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84s5q\" (UniqueName: \"kubernetes.io/projected/d578cfaa-0b09-476e-9cd0-abd3d6274bd7-kube-api-access-84s5q\") pod \"manila-operator-controller-manager-78c6999f6f-nd8rm\" (UID: \"d578cfaa-0b09-476e-9cd0-abd3d6274bd7\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-nd8rm" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.061784 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-b276g"] Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.063828 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7lkz\" (UniqueName: \"kubernetes.io/projected/a5872ed3-9a06-4bd2-b592-b42c548a1db4-kube-api-access-g7lkz\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-qbfns\" (UID: \"a5872ed3-9a06-4bd2-b592-b42c548a1db4\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-qbfns" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.064380 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-b276g" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.069536 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-b9bs6" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.071789 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-ld4p5"] Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.075986 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-nd8rm" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.089101 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-dlrsm" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.089885 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88dh7\" (UniqueName: \"kubernetes.io/projected/e1e458d4-37a1-4111-9e2d-fa49cbdd9e08-kube-api-access-88dh7\") pod \"neutron-operator-controller-manager-78d58447c5-2qgj6\" (UID: \"e1e458d4-37a1-4111-9e2d-fa49cbdd9e08\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-2qgj6" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.089950 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hmlh\" (UniqueName: \"kubernetes.io/projected/e130bc9f-0869-42a0-922b-db361e6b26f3-kube-api-access-2hmlh\") pod \"nova-operator-controller-manager-75d84bc6b9-s5v4q\" (UID: \"e130bc9f-0869-42a0-922b-db361e6b26f3\") " pod="openstack-operators/nova-operator-controller-manager-75d84bc6b9-s5v4q" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.089986 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4z2x\" (UniqueName: \"kubernetes.io/projected/379dbcd5-96e3-4563-ac73-7264f4b90d68-kube-api-access-l4z2x\") pod \"octavia-operator-controller-manager-5f4cd88d46-b6cft\" (UID: \"379dbcd5-96e3-4563-ac73-7264f4b90d68\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-b6cft" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.090057 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bfe9e7f0-b5aa-48a6-9487-e1765752c644-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt\" (UID: \"bfe9e7f0-b5aa-48a6-9487-e1765752c644\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.090090 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42wpn\" (UniqueName: \"kubernetes.io/projected/bfe9e7f0-b5aa-48a6-9487-e1765752c644-kube-api-access-42wpn\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt\" (UID: \"bfe9e7f0-b5aa-48a6-9487-e1765752c644\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.096992 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-b276g"] Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.116321 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-qbfns" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.137008 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-bnr2s"] Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.153012 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-zjrm8"] Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.153205 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bnr2s" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.154293 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-zjrm8" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.157571 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-cp7n2" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.157912 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-vn8d5" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.158580 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88dh7\" (UniqueName: \"kubernetes.io/projected/e1e458d4-37a1-4111-9e2d-fa49cbdd9e08-kube-api-access-88dh7\") pod \"neutron-operator-controller-manager-78d58447c5-2qgj6\" (UID: \"e1e458d4-37a1-4111-9e2d-fa49cbdd9e08\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-2qgj6" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.158736 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bsh7f"] Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.159694 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bsh7f" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.162213 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-tcnnf" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.163946 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-bnr2s"] Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.170327 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-2qgj6" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.170890 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bsh7f"] Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.182408 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-zjrm8"] Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.200821 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4z2x\" (UniqueName: \"kubernetes.io/projected/379dbcd5-96e3-4563-ac73-7264f4b90d68-kube-api-access-l4z2x\") pod \"octavia-operator-controller-manager-5f4cd88d46-b6cft\" (UID: \"379dbcd5-96e3-4563-ac73-7264f4b90d68\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-b6cft" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.200951 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcmnx\" (UniqueName: \"kubernetes.io/projected/9a867ab9-ad43-409c-9d85-0ef229c5e25f-kube-api-access-rcmnx\") pod \"placement-operator-controller-manager-79d5ccc684-ld4p5\" (UID: \"9a867ab9-ad43-409c-9d85-0ef229c5e25f\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ld4p5" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.200993 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bfe9e7f0-b5aa-48a6-9487-e1765752c644-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt\" (UID: \"bfe9e7f0-b5aa-48a6-9487-e1765752c644\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.201030 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42wpn\" (UniqueName: \"kubernetes.io/projected/bfe9e7f0-b5aa-48a6-9487-e1765752c644-kube-api-access-42wpn\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt\" (UID: \"bfe9e7f0-b5aa-48a6-9487-e1765752c644\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.201096 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4scj\" (UniqueName: \"kubernetes.io/projected/b70555f3-c876-49fc-bd77-83efa82abac7-kube-api-access-l4scj\") pod \"ovn-operator-controller-manager-6f75f45d54-b276g\" (UID: \"b70555f3-c876-49fc-bd77-83efa82abac7\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-b276g" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.201180 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hmlh\" (UniqueName: \"kubernetes.io/projected/e130bc9f-0869-42a0-922b-db361e6b26f3-kube-api-access-2hmlh\") pod \"nova-operator-controller-manager-75d84bc6b9-s5v4q\" (UID: \"e130bc9f-0869-42a0-922b-db361e6b26f3\") " pod="openstack-operators/nova-operator-controller-manager-75d84bc6b9-s5v4q" Jan 28 15:19:40 crc kubenswrapper[4893]: E0128 15:19:40.205403 4893 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 
15:19:40 crc kubenswrapper[4893]: E0128 15:19:40.205459 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfe9e7f0-b5aa-48a6-9487-e1765752c644-cert podName:bfe9e7f0-b5aa-48a6-9487-e1765752c644 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:40.705444784 +0000 UTC m=+1098.479059802 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bfe9e7f0-b5aa-48a6-9487-e1765752c644-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt" (UID: "bfe9e7f0-b5aa-48a6-9487-e1765752c644") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.225375 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hmlh\" (UniqueName: \"kubernetes.io/projected/e130bc9f-0869-42a0-922b-db361e6b26f3-kube-api-access-2hmlh\") pod \"nova-operator-controller-manager-75d84bc6b9-s5v4q\" (UID: \"e130bc9f-0869-42a0-922b-db361e6b26f3\") " pod="openstack-operators/nova-operator-controller-manager-75d84bc6b9-s5v4q" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.226278 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42wpn\" (UniqueName: \"kubernetes.io/projected/bfe9e7f0-b5aa-48a6-9487-e1765752c644-kube-api-access-42wpn\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt\" (UID: \"bfe9e7f0-b5aa-48a6-9487-e1765752c644\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.231059 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4z2x\" (UniqueName: \"kubernetes.io/projected/379dbcd5-96e3-4563-ac73-7264f4b90d68-kube-api-access-l4z2x\") pod \"octavia-operator-controller-manager-5f4cd88d46-b6cft\" (UID: \"379dbcd5-96e3-4563-ac73-7264f4b90d68\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-b6cft" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.239075 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-b6cft" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.285677 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-q9t8p"] Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.286870 4893 util.go:30] "No sandbox for pod can be found. 
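Note the durationBeforeRetry values: the first failure for each cert volume is retried after 500ms, and later entries in this section show the same mounts backed off to 1s and then 2s. The nestedpendingoperations layer that logs "No retries permitted until …" doubles the per-volume delay on each consecutive failure. The sketch below reproduces that doubling; only the 500ms/1s/2s progression is taken from the log, and the cap is an assumed value for illustration.

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond // first durationBeforeRetry in the log
	const maxDelay = 2 * time.Minute // assumed cap, not taken from the log

	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d failed; no retries permitted for %v\n", attempt, delay)
		delay *= 2 // 500ms -> 1s -> 2s -> ... as seen across this section
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
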
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-q9t8p" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.290536 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-9wl9q" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.292874 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-q9t8p"] Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.306033 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4scj\" (UniqueName: \"kubernetes.io/projected/b70555f3-c876-49fc-bd77-83efa82abac7-kube-api-access-l4scj\") pod \"ovn-operator-controller-manager-6f75f45d54-b276g\" (UID: \"b70555f3-c876-49fc-bd77-83efa82abac7\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-b276g" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.306084 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmkt2\" (UniqueName: \"kubernetes.io/projected/2dee9e4e-11c8-4db6-a457-6f7bbf047f70-kube-api-access-gmkt2\") pod \"telemetry-operator-controller-manager-85cd9769bb-bsh7f\" (UID: \"2dee9e4e-11c8-4db6-a457-6f7bbf047f70\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bsh7f" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.306134 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92b56\" (UniqueName: \"kubernetes.io/projected/651741dd-f535-40e3-ba34-96b9ce51cf6a-kube-api-access-92b56\") pod \"test-operator-controller-manager-69797bbcbd-zjrm8\" (UID: \"651741dd-f535-40e3-ba34-96b9ce51cf6a\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-zjrm8" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.306166 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgcd5\" (UniqueName: \"kubernetes.io/projected/f1bf10ee-2d99-4b1b-ab99-ae2066b96522-kube-api-access-xgcd5\") pod \"swift-operator-controller-manager-547cbdb99f-bnr2s\" (UID: \"f1bf10ee-2d99-4b1b-ab99-ae2066b96522\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bnr2s" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.306249 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcmnx\" (UniqueName: \"kubernetes.io/projected/9a867ab9-ad43-409c-9d85-0ef229c5e25f-kube-api-access-rcmnx\") pod \"placement-operator-controller-manager-79d5ccc684-ld4p5\" (UID: \"9a867ab9-ad43-409c-9d85-0ef229c5e25f\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ld4p5" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.345625 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcmnx\" (UniqueName: \"kubernetes.io/projected/9a867ab9-ad43-409c-9d85-0ef229c5e25f-kube-api-access-rcmnx\") pod \"placement-operator-controller-manager-79d5ccc684-ld4p5\" (UID: \"9a867ab9-ad43-409c-9d85-0ef229c5e25f\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ld4p5" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.347111 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4scj\" (UniqueName: 
\"kubernetes.io/projected/b70555f3-c876-49fc-bd77-83efa82abac7-kube-api-access-l4scj\") pod \"ovn-operator-controller-manager-6f75f45d54-b276g\" (UID: \"b70555f3-c876-49fc-bd77-83efa82abac7\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-b276g" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.352679 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h"] Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.357412 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.361032 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.361059 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-8b8js" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.361036 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.363092 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ld4p5" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.366561 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h"] Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.380157 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-njb2l"] Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.391240 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-njb2l" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.395120 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-9x8lw" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.407418 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgcd5\" (UniqueName: \"kubernetes.io/projected/f1bf10ee-2d99-4b1b-ab99-ae2066b96522-kube-api-access-xgcd5\") pod \"swift-operator-controller-manager-547cbdb99f-bnr2s\" (UID: \"f1bf10ee-2d99-4b1b-ab99-ae2066b96522\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bnr2s" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.407505 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1a360ec7-efa3-4972-a655-3e21de960aec-cert\") pod \"infra-operator-controller-manager-694cf4f878-rg997\" (UID: \"1a360ec7-efa3-4972-a655-3e21de960aec\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rg997" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.407561 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrl7d\" (UniqueName: \"kubernetes.io/projected/9f55f343-0f75-4fed-ab7b-71c8dddd4af3-kube-api-access-mrl7d\") pod \"watcher-operator-controller-manager-564965969-q9t8p\" (UID: \"9f55f343-0f75-4fed-ab7b-71c8dddd4af3\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-q9t8p" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.407640 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmkt2\" (UniqueName: \"kubernetes.io/projected/2dee9e4e-11c8-4db6-a457-6f7bbf047f70-kube-api-access-gmkt2\") pod \"telemetry-operator-controller-manager-85cd9769bb-bsh7f\" (UID: \"2dee9e4e-11c8-4db6-a457-6f7bbf047f70\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bsh7f" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.407672 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92b56\" (UniqueName: \"kubernetes.io/projected/651741dd-f535-40e3-ba34-96b9ce51cf6a-kube-api-access-92b56\") pod \"test-operator-controller-manager-69797bbcbd-zjrm8\" (UID: \"651741dd-f535-40e3-ba34-96b9ce51cf6a\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-zjrm8" Jan 28 15:19:40 crc kubenswrapper[4893]: E0128 15:19:40.408221 4893 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 15:19:40 crc kubenswrapper[4893]: E0128 15:19:40.408313 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a360ec7-efa3-4972-a655-3e21de960aec-cert podName:1a360ec7-efa3-4972-a655-3e21de960aec nodeName:}" failed. No retries permitted until 2026-01-28 15:19:41.408288241 +0000 UTC m=+1099.181903309 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1a360ec7-efa3-4972-a655-3e21de960aec-cert") pod "infra-operator-controller-manager-694cf4f878-rg997" (UID: "1a360ec7-efa3-4972-a655-3e21de960aec") : secret "infra-operator-webhook-server-cert" not found Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.413558 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-njb2l"] Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.454988 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgcd5\" (UniqueName: \"kubernetes.io/projected/f1bf10ee-2d99-4b1b-ab99-ae2066b96522-kube-api-access-xgcd5\") pod \"swift-operator-controller-manager-547cbdb99f-bnr2s\" (UID: \"f1bf10ee-2d99-4b1b-ab99-ae2066b96522\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bnr2s" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.462545 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmkt2\" (UniqueName: \"kubernetes.io/projected/2dee9e4e-11c8-4db6-a457-6f7bbf047f70-kube-api-access-gmkt2\") pod \"telemetry-operator-controller-manager-85cd9769bb-bsh7f\" (UID: \"2dee9e4e-11c8-4db6-a457-6f7bbf047f70\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bsh7f" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.485713 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92b56\" (UniqueName: \"kubernetes.io/projected/651741dd-f535-40e3-ba34-96b9ce51cf6a-kube-api-access-92b56\") pod \"test-operator-controller-manager-69797bbcbd-zjrm8\" (UID: \"651741dd-f535-40e3-ba34-96b9ce51cf6a\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-zjrm8" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.502786 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-b276g" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.511765 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-webhook-certs\") pod \"openstack-operator-controller-manager-5fd66b5d9c-j5x2h\" (UID: \"24fb3958-2b40-4b9d-90ee-591dafc3987e\") " pod="openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.511827 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-989h2\" (UniqueName: \"kubernetes.io/projected/d2a88a4d-0cb7-40fd-8e25-74e67785af15-kube-api-access-989h2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-njb2l\" (UID: \"d2a88a4d-0cb7-40fd-8e25-74e67785af15\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-njb2l" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.511855 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxpsp\" (UniqueName: \"kubernetes.io/projected/24fb3958-2b40-4b9d-90ee-591dafc3987e-kube-api-access-lxpsp\") pod \"openstack-operator-controller-manager-5fd66b5d9c-j5x2h\" (UID: \"24fb3958-2b40-4b9d-90ee-591dafc3987e\") " pod="openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.511880 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrl7d\" (UniqueName: \"kubernetes.io/projected/9f55f343-0f75-4fed-ab7b-71c8dddd4af3-kube-api-access-mrl7d\") pod \"watcher-operator-controller-manager-564965969-q9t8p\" (UID: \"9f55f343-0f75-4fed-ab7b-71c8dddd4af3\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-q9t8p" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.511913 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-metrics-certs\") pod \"openstack-operator-controller-manager-5fd66b5d9c-j5x2h\" (UID: \"24fb3958-2b40-4b9d-90ee-591dafc3987e\") " pod="openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.513661 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-75d84bc6b9-s5v4q" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.537945 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrl7d\" (UniqueName: \"kubernetes.io/projected/9f55f343-0f75-4fed-ab7b-71c8dddd4af3-kube-api-access-mrl7d\") pod \"watcher-operator-controller-manager-564965969-q9t8p\" (UID: \"9f55f343-0f75-4fed-ab7b-71c8dddd4af3\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-q9t8p" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.561786 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bnr2s" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.615823 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-webhook-certs\") pod \"openstack-operator-controller-manager-5fd66b5d9c-j5x2h\" (UID: \"24fb3958-2b40-4b9d-90ee-591dafc3987e\") " pod="openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.615899 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-989h2\" (UniqueName: \"kubernetes.io/projected/d2a88a4d-0cb7-40fd-8e25-74e67785af15-kube-api-access-989h2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-njb2l\" (UID: \"d2a88a4d-0cb7-40fd-8e25-74e67785af15\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-njb2l" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.615929 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxpsp\" (UniqueName: \"kubernetes.io/projected/24fb3958-2b40-4b9d-90ee-591dafc3987e-kube-api-access-lxpsp\") pod \"openstack-operator-controller-manager-5fd66b5d9c-j5x2h\" (UID: \"24fb3958-2b40-4b9d-90ee-591dafc3987e\") " pod="openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.615965 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-metrics-certs\") pod \"openstack-operator-controller-manager-5fd66b5d9c-j5x2h\" (UID: \"24fb3958-2b40-4b9d-90ee-591dafc3987e\") " pod="openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h" Jan 28 15:19:40 crc kubenswrapper[4893]: E0128 15:19:40.616691 4893 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 15:19:40 crc kubenswrapper[4893]: E0128 15:19:40.616758 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-webhook-certs podName:24fb3958-2b40-4b9d-90ee-591dafc3987e nodeName:}" failed. No retries permitted until 2026-01-28 15:19:41.116742272 +0000 UTC m=+1098.890357300 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-webhook-certs") pod "openstack-operator-controller-manager-5fd66b5d9c-j5x2h" (UID: "24fb3958-2b40-4b9d-90ee-591dafc3987e") : secret "webhook-server-cert" not found Jan 28 15:19:40 crc kubenswrapper[4893]: E0128 15:19:40.620039 4893 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 15:19:40 crc kubenswrapper[4893]: E0128 15:19:40.620092 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-metrics-certs podName:24fb3958-2b40-4b9d-90ee-591dafc3987e nodeName:}" failed. No retries permitted until 2026-01-28 15:19:41.120080213 +0000 UTC m=+1098.893695231 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-metrics-certs") pod "openstack-operator-controller-manager-5fd66b5d9c-j5x2h" (UID: "24fb3958-2b40-4b9d-90ee-591dafc3987e") : secret "metrics-server-cert" not found Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.653863 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-989h2\" (UniqueName: \"kubernetes.io/projected/d2a88a4d-0cb7-40fd-8e25-74e67785af15-kube-api-access-989h2\") pod \"rabbitmq-cluster-operator-manager-668c99d594-njb2l\" (UID: \"d2a88a4d-0cb7-40fd-8e25-74e67785af15\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-njb2l" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.654517 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-zjrm8" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.689612 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxpsp\" (UniqueName: \"kubernetes.io/projected/24fb3958-2b40-4b9d-90ee-591dafc3987e-kube-api-access-lxpsp\") pod \"openstack-operator-controller-manager-5fd66b5d9c-j5x2h\" (UID: \"24fb3958-2b40-4b9d-90ee-591dafc3987e\") " pod="openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.707858 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bsh7f" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.723853 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bfe9e7f0-b5aa-48a6-9487-e1765752c644-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt\" (UID: \"bfe9e7f0-b5aa-48a6-9487-e1765752c644\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt" Jan 28 15:19:40 crc kubenswrapper[4893]: E0128 15:19:40.724060 4893 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:19:40 crc kubenswrapper[4893]: E0128 15:19:40.724319 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfe9e7f0-b5aa-48a6-9487-e1765752c644-cert podName:bfe9e7f0-b5aa-48a6-9487-e1765752c644 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:41.724301572 +0000 UTC m=+1099.497916600 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bfe9e7f0-b5aa-48a6-9487-e1765752c644-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt" (UID: "bfe9e7f0-b5aa-48a6-9487-e1765752c644") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.745265 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-q9t8p" Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.773194 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vdcjn"] Jan 28 15:19:40 crc kubenswrapper[4893]: I0128 15:19:40.819887 4893 util.go:30] "No sandbox for pod can be found. 
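Two details are worth reading together here: reflector.go:368 reported "Caches populated for *v1.Secret" for metrics-server-cert and webhook-server-cert moments earlier, yet secret.go:188 still fails both lookups. The cache-populated line only means the kubelet's watch for that named Secret finished its initial list; that list can legitimately come back empty, so a synced cache and a not-found mount error can coexist until the object is actually created. A small sketch of the same watch-then-populate mechanism, using client-go's shared informer factory rather than the kubelet's internal per-pod reflector (an illustration, not the kubelet's implementation):

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Watch Secrets in the namespace the way a reflector-backed cache does.
	factory := informers.NewSharedInformerFactoryWithOptions(
		cs, 30*time.Second, informers.WithNamespace("openstack-operators"))
	inf := factory.Core().V1().Secrets().Informer()
	inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			s := obj.(*corev1.Secret)
			fmt.Println("cache now holds secret:", s.Name)
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	// Returns once the initial list completes -- even if zero Secrets matched.
	factory.WaitForCacheSync(stop)
	fmt.Println("caches populated; missing secrets will arrive as Add events")
	select {} // keep watching
}
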
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-njb2l" Jan 28 15:19:40 crc kubenswrapper[4893]: W0128 15:19:40.951412 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72d2e324_70de_4019_9673_0a86620ca028.slice/crio-1a3e428394c188bd8134b330fdbcca7fa2fb3207162cf4e15504d41ff3ed64c4 WatchSource:0}: Error finding container 1a3e428394c188bd8134b330fdbcca7fa2fb3207162cf4e15504d41ff3ed64c4: Status 404 returned error can't find the container with id 1a3e428394c188bd8134b330fdbcca7fa2fb3207162cf4e15504d41ff3ed64c4 Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 15:19:41.115565 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-jnhg7"] Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 15:19:41.139425 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-metrics-certs\") pod \"openstack-operator-controller-manager-5fd66b5d9c-j5x2h\" (UID: \"24fb3958-2b40-4b9d-90ee-591dafc3987e\") " pod="openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h" Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 15:19:41.140532 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-j8x44"] Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 15:19:41.140676 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-p6nxj"] Jan 28 15:19:41 crc kubenswrapper[4893]: E0128 15:19:41.140814 4893 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 15:19:41 crc kubenswrapper[4893]: E0128 15:19:41.140874 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-metrics-certs podName:24fb3958-2b40-4b9d-90ee-591dafc3987e nodeName:}" failed. No retries permitted until 2026-01-28 15:19:42.140854863 +0000 UTC m=+1099.914469971 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-metrics-certs") pod "openstack-operator-controller-manager-5fd66b5d9c-j5x2h" (UID: "24fb3958-2b40-4b9d-90ee-591dafc3987e") : secret "metrics-server-cert" not found Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 15:19:41.140898 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-webhook-certs\") pod \"openstack-operator-controller-manager-5fd66b5d9c-j5x2h\" (UID: \"24fb3958-2b40-4b9d-90ee-591dafc3987e\") " pod="openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h" Jan 28 15:19:41 crc kubenswrapper[4893]: E0128 15:19:41.141069 4893 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 15:19:41 crc kubenswrapper[4893]: E0128 15:19:41.141092 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-webhook-certs podName:24fb3958-2b40-4b9d-90ee-591dafc3987e nodeName:}" failed. No retries permitted until 2026-01-28 15:19:42.141084449 +0000 UTC m=+1099.914699567 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-webhook-certs") pod "openstack-operator-controller-manager-5fd66b5d9c-j5x2h" (UID: "24fb3958-2b40-4b9d-90ee-591dafc3987e") : secret "webhook-server-cert" not found Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 15:19:41.187290 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-dqldg"] Jan 28 15:19:41 crc kubenswrapper[4893]: W0128 15:19:41.252110 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0dcd4cb9_92c5_4fb0_9718_79fe6b7d2cea.slice/crio-cc6d62966b8129eb8129925f1fc0005c95c54b466c6369ae31932ba366362406 WatchSource:0}: Error finding container cc6d62966b8129eb8129925f1fc0005c95c54b466c6369ae31932ba366362406: Status 404 returned error can't find the container with id cc6d62966b8129eb8129925f1fc0005c95c54b466c6369ae31932ba366362406 Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 15:19:41.445583 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1a360ec7-efa3-4972-a655-3e21de960aec-cert\") pod \"infra-operator-controller-manager-694cf4f878-rg997\" (UID: \"1a360ec7-efa3-4972-a655-3e21de960aec\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rg997" Jan 28 15:19:41 crc kubenswrapper[4893]: E0128 15:19:41.445838 4893 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 15:19:41 crc kubenswrapper[4893]: E0128 15:19:41.445887 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a360ec7-efa3-4972-a655-3e21de960aec-cert podName:1a360ec7-efa3-4972-a655-3e21de960aec nodeName:}" failed. No retries permitted until 2026-01-28 15:19:43.445870454 +0000 UTC m=+1101.219485482 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1a360ec7-efa3-4972-a655-3e21de960aec-cert") pod "infra-operator-controller-manager-694cf4f878-rg997" (UID: "1a360ec7-efa3-4972-a655-3e21de960aec") : secret "infra-operator-webhook-server-cert" not found Jan 28 15:19:41 crc kubenswrapper[4893]: W0128 15:19:41.462329 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c9ab96_9196_4834_b516_8d1c9564bf35.slice/crio-70ce574bbcdc32b99f4f892179384b9f50c3916b4e479ed7ff558d93f5a89888 WatchSource:0}: Error finding container 70ce574bbcdc32b99f4f892179384b9f50c3916b4e479ed7ff558d93f5a89888: Status 404 returned error can't find the container with id 70ce574bbcdc32b99f4f892179384b9f50c3916b4e479ed7ff558d93f5a89888 Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 15:19:41.463016 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-jfx6g"] Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 15:19:41.482429 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-dlrsm"] Jan 28 15:19:41 crc kubenswrapper[4893]: W0128 15:19:41.494346 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4179ac2f_dd41_4cd3_8558_6daba8252582.slice/crio-bf4373def06e13bc3b701bd319559949615964e48d4996186751f67447566004 WatchSource:0}: Error finding container bf4373def06e13bc3b701bd319559949615964e48d4996186751f67447566004: Status 404 returned error can't find the container with id bf4373def06e13bc3b701bd319559949615964e48d4996186751f67447566004 Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 15:19:41.574445 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-dqldg" event={"ID":"0dcd4cb9-92c5-4fb0-9718-79fe6b7d2cea","Type":"ContainerStarted","Data":"cc6d62966b8129eb8129925f1fc0005c95c54b466c6369ae31932ba366362406"} Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 15:19:41.580257 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jfx6g" event={"ID":"20c9ab96-9196-4834-b516-8d1c9564bf35","Type":"ContainerStarted","Data":"70ce574bbcdc32b99f4f892179384b9f50c3916b4e479ed7ff558d93f5a89888"} Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 15:19:41.581703 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vdcjn" event={"ID":"72d2e324-70de-4019-9673-0a86620ca028","Type":"ContainerStarted","Data":"1a3e428394c188bd8134b330fdbcca7fa2fb3207162cf4e15504d41ff3ed64c4"} Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 15:19:41.586235 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j8x44" event={"ID":"0e525c35-621a-43f8-a8c6-9a472607373d","Type":"ContainerStarted","Data":"e9218f4a78b3a45033ad0c44605f827305c6bbfb6e48be104d9f5f9cb760d57b"} Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 15:19:41.588825 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-jnhg7" event={"ID":"17019a37-b628-4464-b037-470c2be80308","Type":"ContainerStarted","Data":"6c0d66c0a226519ae0cbff009cfe48ab50cbbfa719539ef8cd5ebfeb8b2e3bd2"} Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 
15:19:41.590431 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-p6nxj" event={"ID":"c2188ba2-ad62-4873-abfe-fa7ad88b57a6","Type":"ContainerStarted","Data":"e07bcc77f93e1bdf79aa4d2fae38b0c9d43e81680a7d289e55859b0c2a5032eb"} Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 15:19:41.591443 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-dlrsm" event={"ID":"4179ac2f-dd41-4cd3-8558-6daba8252582","Type":"ContainerStarted","Data":"bf4373def06e13bc3b701bd319559949615964e48d4996186751f67447566004"} Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 15:19:41.627308 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-ld4p5"] Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 15:19:41.638695 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-nd8rm"] Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 15:19:41.662556 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-qbfns"] Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 15:19:41.689863 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-b6cft"] Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 15:19:41.695708 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-75d84bc6b9-s5v4q"] Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 15:19:41.699015 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-bnr2s"] Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 15:19:41.705546 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-4rgm2"] Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 15:19:41.708081 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-b276g"] Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 15:19:41.746324 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-2qgj6"] Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 15:19:41.755422 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-zjrm8"] Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 15:19:41.756361 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bfe9e7f0-b5aa-48a6-9487-e1765752c644-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt\" (UID: \"bfe9e7f0-b5aa-48a6-9487-e1765752c644\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt" Jan 28 15:19:41 crc kubenswrapper[4893]: E0128 15:19:41.756581 4893 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:19:41 crc kubenswrapper[4893]: E0128 15:19:41.756641 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfe9e7f0-b5aa-48a6-9487-e1765752c644-cert podName:bfe9e7f0-b5aa-48a6-9487-e1765752c644 nodeName:}" 
failed. No retries permitted until 2026-01-28 15:19:43.756623831 +0000 UTC m=+1101.530238859 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bfe9e7f0-b5aa-48a6-9487-e1765752c644-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt" (UID: "bfe9e7f0-b5aa-48a6-9487-e1765752c644") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:19:41 crc kubenswrapper[4893]: W0128 15:19:41.792452 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1e458d4_37a1_4111_9e2d_fa49cbdd9e08.slice/crio-290d76f1726f49c3b2116c2854b4acd106ad298abc750c2b1ff7f2a2c47b158f WatchSource:0}: Error finding container 290d76f1726f49c3b2116c2854b4acd106ad298abc750c2b1ff7f2a2c47b158f: Status 404 returned error can't find the container with id 290d76f1726f49c3b2116c2854b4acd106ad298abc750c2b1ff7f2a2c47b158f Jan 28 15:19:41 crc kubenswrapper[4893]: W0128 15:19:41.811731 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7740f64d_b660_493b_b3f5_1041a0ce3061.slice/crio-15e20f82910c8ff2697cadbcfc51e0dc0cbe8d39c74b215133c1badf4cb2070a WatchSource:0}: Error finding container 15e20f82910c8ff2697cadbcfc51e0dc0cbe8d39c74b215133c1badf4cb2070a: Status 404 returned error can't find the container with id 15e20f82910c8ff2697cadbcfc51e0dc0cbe8d39c74b215133c1badf4cb2070a Jan 28 15:19:41 crc kubenswrapper[4893]: W0128 15:19:41.843630 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod651741dd_f535_40e3_ba34_96b9ce51cf6a.slice/crio-c616ba4cc604de79e6cfc968a5eb99f5878fddb2b234363db83bb1bb2213b17a WatchSource:0}: Error finding container c616ba4cc604de79e6cfc968a5eb99f5878fddb2b234363db83bb1bb2213b17a: Status 404 returned error can't find the container with id c616ba4cc604de79e6cfc968a5eb99f5878fddb2b234363db83bb1bb2213b17a Jan 28 15:19:41 crc kubenswrapper[4893]: E0128 15:19:41.868084 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.75:5001/openstack-k8s-operators/nova-operator:f1cc53e6933b12c4595ceed3502877393a59649f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2hmlh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-75d84bc6b9-s5v4q_openstack-operators(e130bc9f-0869-42a0-922b-db361e6b26f3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 15:19:41 crc kubenswrapper[4893]: E0128 15:19:41.869325 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-75d84bc6b9-s5v4q" podUID="e130bc9f-0869-42a0-922b-db361e6b26f3" Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 15:19:41.886015 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-njb2l"] Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 15:19:41.899063 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bsh7f"] Jan 28 15:19:41 crc kubenswrapper[4893]: E0128 15:19:41.900847 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l4scj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-b276g_openstack-operators(b70555f3-c876-49fc-bd77-83efa82abac7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 15:19:41 crc kubenswrapper[4893]: E0128 15:19:41.901249 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-92b56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-zjrm8_openstack-operators(651741dd-f535-40e3-ba34-96b9ce51cf6a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 15:19:41 crc kubenswrapper[4893]: E0128 15:19:41.902238 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-b276g" podUID="b70555f3-c876-49fc-bd77-83efa82abac7" Jan 28 15:19:41 crc kubenswrapper[4893]: E0128 15:19:41.903128 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-zjrm8" podUID="651741dd-f535-40e3-ba34-96b9ce51cf6a" Jan 28 15:19:41 crc kubenswrapper[4893]: E0128 15:19:41.905825 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gmkt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-bsh7f_openstack-operators(2dee9e4e-11c8-4db6-a457-6f7bbf047f70): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 15:19:41 crc kubenswrapper[4893]: E0128 15:19:41.909579 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bsh7f" podUID="2dee9e4e-11c8-4db6-a457-6f7bbf047f70" Jan 28 15:19:41 crc kubenswrapper[4893]: I0128 15:19:41.914340 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-q9t8p"] Jan 28 15:19:41 crc kubenswrapper[4893]: W0128 15:19:41.919984 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f55f343_0f75_4fed_ab7b_71c8dddd4af3.slice/crio-52e120d669e64495bd74aa52fd1a7a901704549b8d2135eb133a89c2527a6054 WatchSource:0}: Error finding container 52e120d669e64495bd74aa52fd1a7a901704549b8d2135eb133a89c2527a6054: Status 404 returned error can't find the container with id 52e120d669e64495bd74aa52fd1a7a901704549b8d2135eb133a89c2527a6054 Jan 28 15:19:41 crc kubenswrapper[4893]: E0128 15:19:41.920296 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-989h2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-njb2l_openstack-operators(d2a88a4d-0cb7-40fd-8e25-74e67785af15): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 15:19:41 crc kubenswrapper[4893]: E0128 15:19:41.921580 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-njb2l" podUID="d2a88a4d-0cb7-40fd-8e25-74e67785af15" Jan 28 15:19:41 crc kubenswrapper[4893]: E0128 15:19:41.922694 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mrl7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-q9t8p_openstack-operators(9f55f343-0f75-4fed-ab7b-71c8dddd4af3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 15:19:41 crc kubenswrapper[4893]: E0128 15:19:41.923958 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-q9t8p" podUID="9f55f343-0f75-4fed-ab7b-71c8dddd4af3" Jan 28 15:19:42 crc kubenswrapper[4893]: I0128 15:19:42.163163 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-webhook-certs\") pod \"openstack-operator-controller-manager-5fd66b5d9c-j5x2h\" (UID: \"24fb3958-2b40-4b9d-90ee-591dafc3987e\") " pod="openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h" Jan 28 15:19:42 crc kubenswrapper[4893]: I0128 15:19:42.163285 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-metrics-certs\") pod \"openstack-operator-controller-manager-5fd66b5d9c-j5x2h\" (UID: \"24fb3958-2b40-4b9d-90ee-591dafc3987e\") " pod="openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h" Jan 28 15:19:42 crc kubenswrapper[4893]: E0128 15:19:42.163573 4893 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 15:19:42 crc kubenswrapper[4893]: E0128 15:19:42.163639 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-metrics-certs podName:24fb3958-2b40-4b9d-90ee-591dafc3987e nodeName:}" failed. No retries permitted until 2026-01-28 15:19:44.16362101 +0000 UTC m=+1101.937236038 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-metrics-certs") pod "openstack-operator-controller-manager-5fd66b5d9c-j5x2h" (UID: "24fb3958-2b40-4b9d-90ee-591dafc3987e") : secret "metrics-server-cert" not found Jan 28 15:19:42 crc kubenswrapper[4893]: E0128 15:19:42.163667 4893 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 15:19:42 crc kubenswrapper[4893]: E0128 15:19:42.163741 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-webhook-certs podName:24fb3958-2b40-4b9d-90ee-591dafc3987e nodeName:}" failed. 
No retries permitted until 2026-01-28 15:19:44.163721473 +0000 UTC m=+1101.937336501 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-webhook-certs") pod "openstack-operator-controller-manager-5fd66b5d9c-j5x2h" (UID: "24fb3958-2b40-4b9d-90ee-591dafc3987e") : secret "webhook-server-cert" not found Jan 28 15:19:42 crc kubenswrapper[4893]: I0128 15:19:42.609064 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ld4p5" event={"ID":"9a867ab9-ad43-409c-9d85-0ef229c5e25f","Type":"ContainerStarted","Data":"fd89731c9120d6443bf1943dd70048d3511b4e70951aafa67dd615062adf59bb"} Jan 28 15:19:42 crc kubenswrapper[4893]: I0128 15:19:42.610564 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-b6cft" event={"ID":"379dbcd5-96e3-4563-ac73-7264f4b90d68","Type":"ContainerStarted","Data":"fef4b969bdb54becedbcd5b808942d70c538381ab81f06ffa7509c20c9655b08"} Jan 28 15:19:42 crc kubenswrapper[4893]: I0128 15:19:42.613228 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-zjrm8" event={"ID":"651741dd-f535-40e3-ba34-96b9ce51cf6a","Type":"ContainerStarted","Data":"c616ba4cc604de79e6cfc968a5eb99f5878fddb2b234363db83bb1bb2213b17a"} Jan 28 15:19:42 crc kubenswrapper[4893]: E0128 15:19:42.619455 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-zjrm8" podUID="651741dd-f535-40e3-ba34-96b9ce51cf6a" Jan 28 15:19:42 crc kubenswrapper[4893]: I0128 15:19:42.624508 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-b276g" event={"ID":"b70555f3-c876-49fc-bd77-83efa82abac7","Type":"ContainerStarted","Data":"da7b0612fec011304a359e730a20d57535c2d492ded5852dee138ba05e0a38bb"} Jan 28 15:19:42 crc kubenswrapper[4893]: E0128 15:19:42.627181 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-b276g" podUID="b70555f3-c876-49fc-bd77-83efa82abac7" Jan 28 15:19:42 crc kubenswrapper[4893]: I0128 15:19:42.628631 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-njb2l" event={"ID":"d2a88a4d-0cb7-40fd-8e25-74e67785af15","Type":"ContainerStarted","Data":"c57a627769217018192bf5b228c753a566c5ad79fcd822ff8ea2834da41e87db"} Jan 28 15:19:42 crc kubenswrapper[4893]: E0128 15:19:42.632347 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-njb2l" 
podUID="d2a88a4d-0cb7-40fd-8e25-74e67785af15" Jan 28 15:19:42 crc kubenswrapper[4893]: I0128 15:19:42.639982 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-q9t8p" event={"ID":"9f55f343-0f75-4fed-ab7b-71c8dddd4af3","Type":"ContainerStarted","Data":"52e120d669e64495bd74aa52fd1a7a901704549b8d2135eb133a89c2527a6054"} Jan 28 15:19:42 crc kubenswrapper[4893]: E0128 15:19:42.642934 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-q9t8p" podUID="9f55f343-0f75-4fed-ab7b-71c8dddd4af3" Jan 28 15:19:42 crc kubenswrapper[4893]: I0128 15:19:42.644419 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-qbfns" event={"ID":"a5872ed3-9a06-4bd2-b592-b42c548a1db4","Type":"ContainerStarted","Data":"70dcf2bf015feefc03d8875ef5da7aaf11a45bc40d749963875c18062855db0d"} Jan 28 15:19:42 crc kubenswrapper[4893]: I0128 15:19:42.651426 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4rgm2" event={"ID":"7740f64d-b660-493b-b3f5-1041a0ce3061","Type":"ContainerStarted","Data":"15e20f82910c8ff2697cadbcfc51e0dc0cbe8d39c74b215133c1badf4cb2070a"} Jan 28 15:19:42 crc kubenswrapper[4893]: I0128 15:19:42.655064 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-2qgj6" event={"ID":"e1e458d4-37a1-4111-9e2d-fa49cbdd9e08","Type":"ContainerStarted","Data":"290d76f1726f49c3b2116c2854b4acd106ad298abc750c2b1ff7f2a2c47b158f"} Jan 28 15:19:42 crc kubenswrapper[4893]: I0128 15:19:42.665025 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bnr2s" event={"ID":"f1bf10ee-2d99-4b1b-ab99-ae2066b96522","Type":"ContainerStarted","Data":"14cb78cdd99177c21908cce1b3b48752b934e955f947b703cc217e1ff9d892aa"} Jan 28 15:19:42 crc kubenswrapper[4893]: I0128 15:19:42.670793 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bsh7f" event={"ID":"2dee9e4e-11c8-4db6-a457-6f7bbf047f70","Type":"ContainerStarted","Data":"4b51bca90d1acdfc2d45caa279070a5ed51364172c9e5148398c9866911fe975"} Jan 28 15:19:42 crc kubenswrapper[4893]: E0128 15:19:42.672067 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bsh7f" podUID="2dee9e4e-11c8-4db6-a457-6f7bbf047f70" Jan 28 15:19:42 crc kubenswrapper[4893]: I0128 15:19:42.673412 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-nd8rm" event={"ID":"d578cfaa-0b09-476e-9cd0-abd3d6274bd7","Type":"ContainerStarted","Data":"3238368e36d1136897913c335df53d9eb95f8ba8f50d2900dff7eb483901bf39"} Jan 28 15:19:42 crc kubenswrapper[4893]: I0128 15:19:42.681343 4893 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/nova-operator-controller-manager-75d84bc6b9-s5v4q" event={"ID":"e130bc9f-0869-42a0-922b-db361e6b26f3","Type":"ContainerStarted","Data":"3afa2597ebc877a41bab02a9cf8a44d3a4eba33e1c757a2a9e987cd5b47842e4"} Jan 28 15:19:42 crc kubenswrapper[4893]: E0128 15:19:42.683147 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.75:5001/openstack-k8s-operators/nova-operator:f1cc53e6933b12c4595ceed3502877393a59649f\\\"\"" pod="openstack-operators/nova-operator-controller-manager-75d84bc6b9-s5v4q" podUID="e130bc9f-0869-42a0-922b-db361e6b26f3" Jan 28 15:19:43 crc kubenswrapper[4893]: I0128 15:19:43.485892 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1a360ec7-efa3-4972-a655-3e21de960aec-cert\") pod \"infra-operator-controller-manager-694cf4f878-rg997\" (UID: \"1a360ec7-efa3-4972-a655-3e21de960aec\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rg997" Jan 28 15:19:43 crc kubenswrapper[4893]: E0128 15:19:43.486058 4893 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 15:19:43 crc kubenswrapper[4893]: E0128 15:19:43.486159 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a360ec7-efa3-4972-a655-3e21de960aec-cert podName:1a360ec7-efa3-4972-a655-3e21de960aec nodeName:}" failed. No retries permitted until 2026-01-28 15:19:47.486140174 +0000 UTC m=+1105.259755202 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1a360ec7-efa3-4972-a655-3e21de960aec-cert") pod "infra-operator-controller-manager-694cf4f878-rg997" (UID: "1a360ec7-efa3-4972-a655-3e21de960aec") : secret "infra-operator-webhook-server-cert" not found Jan 28 15:19:43 crc kubenswrapper[4893]: E0128 15:19:43.693877 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-njb2l" podUID="d2a88a4d-0cb7-40fd-8e25-74e67785af15" Jan 28 15:19:43 crc kubenswrapper[4893]: E0128 15:19:43.695067 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bsh7f" podUID="2dee9e4e-11c8-4db6-a457-6f7bbf047f70" Jan 28 15:19:43 crc kubenswrapper[4893]: E0128 15:19:43.697164 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-zjrm8" podUID="651741dd-f535-40e3-ba34-96b9ce51cf6a" Jan 28 15:19:43 crc kubenswrapper[4893]: E0128 15:19:43.697223 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-b276g" podUID="b70555f3-c876-49fc-bd77-83efa82abac7" Jan 28 15:19:43 crc kubenswrapper[4893]: E0128 15:19:43.697299 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.75:5001/openstack-k8s-operators/nova-operator:f1cc53e6933b12c4595ceed3502877393a59649f\\\"\"" pod="openstack-operators/nova-operator-controller-manager-75d84bc6b9-s5v4q" podUID="e130bc9f-0869-42a0-922b-db361e6b26f3" Jan 28 15:19:43 crc kubenswrapper[4893]: E0128 15:19:43.697586 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-q9t8p" podUID="9f55f343-0f75-4fed-ab7b-71c8dddd4af3" Jan 28 15:19:43 crc kubenswrapper[4893]: I0128 15:19:43.793154 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bfe9e7f0-b5aa-48a6-9487-e1765752c644-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt\" (UID: \"bfe9e7f0-b5aa-48a6-9487-e1765752c644\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt" Jan 28 15:19:43 crc kubenswrapper[4893]: E0128 15:19:43.793841 4893 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:19:43 crc kubenswrapper[4893]: E0128 15:19:43.793906 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfe9e7f0-b5aa-48a6-9487-e1765752c644-cert podName:bfe9e7f0-b5aa-48a6-9487-e1765752c644 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:47.793889539 +0000 UTC m=+1105.567504567 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bfe9e7f0-b5aa-48a6-9487-e1765752c644-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt" (UID: "bfe9e7f0-b5aa-48a6-9487-e1765752c644") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:19:44 crc kubenswrapper[4893]: I0128 15:19:44.253684 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-metrics-certs\") pod \"openstack-operator-controller-manager-5fd66b5d9c-j5x2h\" (UID: \"24fb3958-2b40-4b9d-90ee-591dafc3987e\") " pod="openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h" Jan 28 15:19:44 crc kubenswrapper[4893]: I0128 15:19:44.253786 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-webhook-certs\") pod \"openstack-operator-controller-manager-5fd66b5d9c-j5x2h\" (UID: \"24fb3958-2b40-4b9d-90ee-591dafc3987e\") " pod="openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h" Jan 28 15:19:44 crc kubenswrapper[4893]: E0128 15:19:44.253900 4893 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 15:19:44 crc kubenswrapper[4893]: E0128 15:19:44.253948 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-webhook-certs podName:24fb3958-2b40-4b9d-90ee-591dafc3987e nodeName:}" failed. No retries permitted until 2026-01-28 15:19:48.253934929 +0000 UTC m=+1106.027549957 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-webhook-certs") pod "openstack-operator-controller-manager-5fd66b5d9c-j5x2h" (UID: "24fb3958-2b40-4b9d-90ee-591dafc3987e") : secret "webhook-server-cert" not found Jan 28 15:19:44 crc kubenswrapper[4893]: E0128 15:19:44.253991 4893 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 15:19:44 crc kubenswrapper[4893]: E0128 15:19:44.254009 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-metrics-certs podName:24fb3958-2b40-4b9d-90ee-591dafc3987e nodeName:}" failed. No retries permitted until 2026-01-28 15:19:48.25400375 +0000 UTC m=+1106.027618778 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-metrics-certs") pod "openstack-operator-controller-manager-5fd66b5d9c-j5x2h" (UID: "24fb3958-2b40-4b9d-90ee-591dafc3987e") : secret "metrics-server-cert" not found Jan 28 15:19:47 crc kubenswrapper[4893]: I0128 15:19:47.534375 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1a360ec7-efa3-4972-a655-3e21de960aec-cert\") pod \"infra-operator-controller-manager-694cf4f878-rg997\" (UID: \"1a360ec7-efa3-4972-a655-3e21de960aec\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rg997" Jan 28 15:19:47 crc kubenswrapper[4893]: E0128 15:19:47.534812 4893 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 15:19:47 crc kubenswrapper[4893]: E0128 15:19:47.535137 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a360ec7-efa3-4972-a655-3e21de960aec-cert podName:1a360ec7-efa3-4972-a655-3e21de960aec nodeName:}" failed. No retries permitted until 2026-01-28 15:19:55.535116341 +0000 UTC m=+1113.308731379 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1a360ec7-efa3-4972-a655-3e21de960aec-cert") pod "infra-operator-controller-manager-694cf4f878-rg997" (UID: "1a360ec7-efa3-4972-a655-3e21de960aec") : secret "infra-operator-webhook-server-cert" not found Jan 28 15:19:47 crc kubenswrapper[4893]: I0128 15:19:47.843299 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bfe9e7f0-b5aa-48a6-9487-e1765752c644-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt\" (UID: \"bfe9e7f0-b5aa-48a6-9487-e1765752c644\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt" Jan 28 15:19:47 crc kubenswrapper[4893]: E0128 15:19:47.843551 4893 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:19:47 crc kubenswrapper[4893]: E0128 15:19:47.843605 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfe9e7f0-b5aa-48a6-9487-e1765752c644-cert podName:bfe9e7f0-b5aa-48a6-9487-e1765752c644 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:55.843583396 +0000 UTC m=+1113.617198424 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bfe9e7f0-b5aa-48a6-9487-e1765752c644-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt" (UID: "bfe9e7f0-b5aa-48a6-9487-e1765752c644") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:19:48 crc kubenswrapper[4893]: I0128 15:19:48.356798 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-metrics-certs\") pod \"openstack-operator-controller-manager-5fd66b5d9c-j5x2h\" (UID: \"24fb3958-2b40-4b9d-90ee-591dafc3987e\") " pod="openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h" Jan 28 15:19:48 crc kubenswrapper[4893]: I0128 15:19:48.356987 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-webhook-certs\") pod \"openstack-operator-controller-manager-5fd66b5d9c-j5x2h\" (UID: \"24fb3958-2b40-4b9d-90ee-591dafc3987e\") " pod="openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h" Jan 28 15:19:48 crc kubenswrapper[4893]: E0128 15:19:48.357971 4893 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 15:19:48 crc kubenswrapper[4893]: E0128 15:19:48.358120 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-webhook-certs podName:24fb3958-2b40-4b9d-90ee-591dafc3987e nodeName:}" failed. No retries permitted until 2026-01-28 15:19:56.358090115 +0000 UTC m=+1114.131705143 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-webhook-certs") pod "openstack-operator-controller-manager-5fd66b5d9c-j5x2h" (UID: "24fb3958-2b40-4b9d-90ee-591dafc3987e") : secret "webhook-server-cert" not found Jan 28 15:19:48 crc kubenswrapper[4893]: E0128 15:19:48.358648 4893 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 15:19:48 crc kubenswrapper[4893]: E0128 15:19:48.358800 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-metrics-certs podName:24fb3958-2b40-4b9d-90ee-591dafc3987e nodeName:}" failed. No retries permitted until 2026-01-28 15:19:56.358769183 +0000 UTC m=+1114.132384211 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-metrics-certs") pod "openstack-operator-controller-manager-5fd66b5d9c-j5x2h" (UID: "24fb3958-2b40-4b9d-90ee-591dafc3987e") : secret "metrics-server-cert" not found Jan 28 15:19:55 crc kubenswrapper[4893]: E0128 15:19:55.416466 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922" Jan 28 15:19:55 crc kubenswrapper[4893]: E0128 15:19:55.417242 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xgcd5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-bnr2s_openstack-operators(f1bf10ee-2d99-4b1b-ab99-ae2066b96522): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:19:55 crc kubenswrapper[4893]: E0128 15:19:55.418593 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bnr2s" 
podUID="f1bf10ee-2d99-4b1b-ab99-ae2066b96522" Jan 28 15:19:55 crc kubenswrapper[4893]: I0128 15:19:55.579893 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1a360ec7-efa3-4972-a655-3e21de960aec-cert\") pod \"infra-operator-controller-manager-694cf4f878-rg997\" (UID: \"1a360ec7-efa3-4972-a655-3e21de960aec\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rg997" Jan 28 15:19:55 crc kubenswrapper[4893]: I0128 15:19:55.586034 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1a360ec7-efa3-4972-a655-3e21de960aec-cert\") pod \"infra-operator-controller-manager-694cf4f878-rg997\" (UID: \"1a360ec7-efa3-4972-a655-3e21de960aec\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rg997" Jan 28 15:19:55 crc kubenswrapper[4893]: E0128 15:19:55.780494 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bnr2s" podUID="f1bf10ee-2d99-4b1b-ab99-ae2066b96522" Jan 28 15:19:55 crc kubenswrapper[4893]: I0128 15:19:55.865057 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rg997" Jan 28 15:19:55 crc kubenswrapper[4893]: I0128 15:19:55.888961 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bfe9e7f0-b5aa-48a6-9487-e1765752c644-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt\" (UID: \"bfe9e7f0-b5aa-48a6-9487-e1765752c644\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt" Jan 28 15:19:55 crc kubenswrapper[4893]: E0128 15:19:55.889151 4893 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:19:55 crc kubenswrapper[4893]: E0128 15:19:55.889221 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfe9e7f0-b5aa-48a6-9487-e1765752c644-cert podName:bfe9e7f0-b5aa-48a6-9487-e1765752c644 nodeName:}" failed. No retries permitted until 2026-01-28 15:20:11.889199589 +0000 UTC m=+1129.662814617 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bfe9e7f0-b5aa-48a6-9487-e1765752c644-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt" (UID: "bfe9e7f0-b5aa-48a6-9487-e1765752c644") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:19:56 crc kubenswrapper[4893]: E0128 15:19:56.059032 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 28 15:19:56 crc kubenswrapper[4893]: E0128 15:19:56.059245 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-84s5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-nd8rm_openstack-operators(d578cfaa-0b09-476e-9cd0-abd3d6274bd7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:19:56 crc kubenswrapper[4893]: E0128 15:19:56.061250 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-nd8rm" podUID="d578cfaa-0b09-476e-9cd0-abd3d6274bd7" Jan 28 15:19:56 crc kubenswrapper[4893]: I0128 15:19:56.397991 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-metrics-certs\") pod \"openstack-operator-controller-manager-5fd66b5d9c-j5x2h\" (UID: \"24fb3958-2b40-4b9d-90ee-591dafc3987e\") " pod="openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h" Jan 28 15:19:56 crc kubenswrapper[4893]: I0128 15:19:56.398128 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-webhook-certs\") pod \"openstack-operator-controller-manager-5fd66b5d9c-j5x2h\" (UID: \"24fb3958-2b40-4b9d-90ee-591dafc3987e\") " pod="openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h" Jan 28 15:19:56 crc kubenswrapper[4893]: E0128 15:19:56.398262 4893 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 15:19:56 crc kubenswrapper[4893]: E0128 15:19:56.398385 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-metrics-certs podName:24fb3958-2b40-4b9d-90ee-591dafc3987e nodeName:}" failed. No retries permitted until 2026-01-28 15:20:12.398357372 +0000 UTC m=+1130.171972540 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-metrics-certs") pod "openstack-operator-controller-manager-5fd66b5d9c-j5x2h" (UID: "24fb3958-2b40-4b9d-90ee-591dafc3987e") : secret "metrics-server-cert" not found Jan 28 15:19:56 crc kubenswrapper[4893]: E0128 15:19:56.398274 4893 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 15:19:56 crc kubenswrapper[4893]: E0128 15:19:56.398467 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-webhook-certs podName:24fb3958-2b40-4b9d-90ee-591dafc3987e nodeName:}" failed. No retries permitted until 2026-01-28 15:20:12.398446245 +0000 UTC m=+1130.172061273 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-webhook-certs") pod "openstack-operator-controller-manager-5fd66b5d9c-j5x2h" (UID: "24fb3958-2b40-4b9d-90ee-591dafc3987e") : secret "webhook-server-cert" not found Jan 28 15:19:56 crc kubenswrapper[4893]: E0128 15:19:56.791382 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-nd8rm" podUID="d578cfaa-0b09-476e-9cd0-abd3d6274bd7" Jan 28 15:19:56 crc kubenswrapper[4893]: E0128 15:19:56.898964 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337" Jan 28 15:19:56 crc kubenswrapper[4893]: E0128 15:19:56.899147 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tgq2r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
glance-operator-controller-manager-78fdd796fd-dlrsm_openstack-operators(4179ac2f-dd41-4cd3-8558-6daba8252582): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:19:56 crc kubenswrapper[4893]: E0128 15:19:56.900930 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-dlrsm" podUID="4179ac2f-dd41-4cd3-8558-6daba8252582" Jan 28 15:19:57 crc kubenswrapper[4893]: E0128 15:19:57.717174 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e" Jan 28 15:19:57 crc kubenswrapper[4893]: E0128 15:19:57.717775 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-88dh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78d58447c5-2qgj6_openstack-operators(e1e458d4-37a1-4111-9e2d-fa49cbdd9e08): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:19:57 crc 
kubenswrapper[4893]: E0128 15:19:57.719361 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-2qgj6" podUID="e1e458d4-37a1-4111-9e2d-fa49cbdd9e08" Jan 28 15:19:57 crc kubenswrapper[4893]: E0128 15:19:57.802081 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337\\\"\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-dlrsm" podUID="4179ac2f-dd41-4cd3-8558-6daba8252582" Jan 28 15:19:57 crc kubenswrapper[4893]: E0128 15:19:57.803730 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-2qgj6" podUID="e1e458d4-37a1-4111-9e2d-fa49cbdd9e08" Jan 28 15:19:58 crc kubenswrapper[4893]: E0128 15:19:58.346886 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 28 15:19:58 crc kubenswrapper[4893]: E0128 15:19:58.347054 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fcdrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-4rgm2_openstack-operators(7740f64d-b660-493b-b3f5-1041a0ce3061): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:19:58 crc kubenswrapper[4893]: E0128 15:19:58.348280 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4rgm2" podUID="7740f64d-b660-493b-b3f5-1041a0ce3061" Jan 28 15:19:58 crc kubenswrapper[4893]: E0128 15:19:58.807916 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4rgm2" podUID="7740f64d-b660-493b-b3f5-1041a0ce3061" Jan 28 15:20:01 crc kubenswrapper[4893]: I0128 15:20:01.494581 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-rg997"] Jan 28 15:20:02 crc kubenswrapper[4893]: I0128 15:20:02.833357 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rg997" event={"ID":"1a360ec7-efa3-4972-a655-3e21de960aec","Type":"ContainerStarted","Data":"52a5108c1ea77c5749b1bb7131b87543ae5c3ff5993ce912b9b0da501d9d1d78"} Jan 28 15:20:05 crc kubenswrapper[4893]: I0128 15:20:05.854418 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jfx6g" event={"ID":"20c9ab96-9196-4834-b516-8d1c9564bf35","Type":"ContainerStarted","Data":"bef9ee1e7bbbeb31a68212068aa1e3d3a7e735219318281c0afbbc986e93afaf"} Jan 28 15:20:05 crc kubenswrapper[4893]: I0128 15:20:05.854810 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jfx6g" Jan 28 15:20:05 crc kubenswrapper[4893]: I0128 15:20:05.874235 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jfx6g" podStartSLOduration=10.003703859 podStartE2EDuration="26.874202035s" podCreationTimestamp="2026-01-28 15:19:39 +0000 UTC" firstStartedPulling="2026-01-28 15:19:41.465167572 +0000 UTC m=+1099.238782600" lastFinishedPulling="2026-01-28 15:19:58.335665748 +0000 UTC m=+1116.109280776" observedRunningTime="2026-01-28 15:20:05.870965567 +0000 UTC 
m=+1123.644580595" watchObservedRunningTime="2026-01-28 15:20:05.874202035 +0000 UTC m=+1123.647817083" Jan 28 15:20:06 crc kubenswrapper[4893]: I0128 15:20:06.864669 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-dqldg" event={"ID":"0dcd4cb9-92c5-4fb0-9718-79fe6b7d2cea","Type":"ContainerStarted","Data":"d10a7cb5ef1f03e532895298ccb23de84c6d8070360b9cfe22d18970dba28fbe"} Jan 28 15:20:06 crc kubenswrapper[4893]: I0128 15:20:06.865231 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-dqldg" Jan 28 15:20:06 crc kubenswrapper[4893]: I0128 15:20:06.883145 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-dqldg" podStartSLOduration=12.252184603 podStartE2EDuration="27.883118584s" podCreationTimestamp="2026-01-28 15:19:39 +0000 UTC" firstStartedPulling="2026-01-28 15:19:41.254513681 +0000 UTC m=+1099.028128709" lastFinishedPulling="2026-01-28 15:19:56.885447662 +0000 UTC m=+1114.659062690" observedRunningTime="2026-01-28 15:20:06.879629928 +0000 UTC m=+1124.653244956" watchObservedRunningTime="2026-01-28 15:20:06.883118584 +0000 UTC m=+1124.656733612" Jan 28 15:20:08 crc kubenswrapper[4893]: I0128 15:20:08.879610 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-njb2l" event={"ID":"d2a88a4d-0cb7-40fd-8e25-74e67785af15","Type":"ContainerStarted","Data":"de88bb078210ecb6024e0ff90c08d266f8ffd21322b88df6fa907ad8017d9895"} Jan 28 15:20:08 crc kubenswrapper[4893]: I0128 15:20:08.881246 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-75d84bc6b9-s5v4q" event={"ID":"e130bc9f-0869-42a0-922b-db361e6b26f3","Type":"ContainerStarted","Data":"b8013829a44f75ba0b541f002acd9e81a74b3b2d38a28b692babfa35289b5eed"} Jan 28 15:20:08 crc kubenswrapper[4893]: I0128 15:20:08.881418 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-75d84bc6b9-s5v4q" Jan 28 15:20:08 crc kubenswrapper[4893]: I0128 15:20:08.882310 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-qbfns" event={"ID":"a5872ed3-9a06-4bd2-b592-b42c548a1db4","Type":"ContainerStarted","Data":"560860b9688d093733fc74d0db811a4da1312491b4184a5d04c207771587cf9b"} Jan 28 15:20:08 crc kubenswrapper[4893]: I0128 15:20:08.882717 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-qbfns" Jan 28 15:20:08 crc kubenswrapper[4893]: I0128 15:20:08.884432 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rg997" event={"ID":"1a360ec7-efa3-4972-a655-3e21de960aec","Type":"ContainerStarted","Data":"266bb7acfacef26efef55eb332a233703660b9a7e275b722d9efc17c133791c9"} Jan 28 15:20:08 crc kubenswrapper[4893]: I0128 15:20:08.884827 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rg997" Jan 28 15:20:08 crc kubenswrapper[4893]: I0128 15:20:08.886005 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-jnhg7" 
event={"ID":"17019a37-b628-4464-b037-470c2be80308","Type":"ContainerStarted","Data":"746676f1421829c44899f2c274ede7d6b4b689fd7ba5e0ae8f7b49aaf8d3d6ad"} Jan 28 15:20:08 crc kubenswrapper[4893]: I0128 15:20:08.886389 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-jnhg7" Jan 28 15:20:08 crc kubenswrapper[4893]: I0128 15:20:08.915892 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j8x44" event={"ID":"0e525c35-621a-43f8-a8c6-9a472607373d","Type":"ContainerStarted","Data":"8914b704a5d6e7a592d2fdaf5b06ac9fbb64392813e0ee5f5aa171baaae905af"} Jan 28 15:20:08 crc kubenswrapper[4893]: I0128 15:20:08.915935 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j8x44" Jan 28 15:20:08 crc kubenswrapper[4893]: I0128 15:20:08.915947 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ld4p5" event={"ID":"9a867ab9-ad43-409c-9d85-0ef229c5e25f","Type":"ContainerStarted","Data":"5214d1ccf5badfb479ebbf311105a7d34b7d0d54b8c585b2fa06d033544293bf"} Jan 28 15:20:08 crc kubenswrapper[4893]: I0128 15:20:08.915963 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ld4p5" Jan 28 15:20:08 crc kubenswrapper[4893]: I0128 15:20:08.939824 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-p6nxj" event={"ID":"c2188ba2-ad62-4873-abfe-fa7ad88b57a6","Type":"ContainerStarted","Data":"10639436ed75f4c8e8349a36b2c6a3a037a3fe5a4168ab6076c930c34c108ca9"} Jan 28 15:20:08 crc kubenswrapper[4893]: I0128 15:20:08.940823 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-p6nxj" Jan 28 15:20:08 crc kubenswrapper[4893]: I0128 15:20:08.976016 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-njb2l" podStartSLOduration=2.997000368 podStartE2EDuration="28.975986942s" podCreationTimestamp="2026-01-28 15:19:40 +0000 UTC" firstStartedPulling="2026-01-28 15:19:41.919983388 +0000 UTC m=+1099.693598416" lastFinishedPulling="2026-01-28 15:20:07.898969962 +0000 UTC m=+1125.672584990" observedRunningTime="2026-01-28 15:20:08.94114738 +0000 UTC m=+1126.714762418" watchObservedRunningTime="2026-01-28 15:20:08.975986942 +0000 UTC m=+1126.749601970" Jan 28 15:20:08 crc kubenswrapper[4893]: I0128 15:20:08.978371 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-zjrm8" event={"ID":"651741dd-f535-40e3-ba34-96b9ce51cf6a","Type":"ContainerStarted","Data":"1ced3af423486bd85f7ae23f70612592351c77616fde4a76a60db66b74b4180f"} Jan 28 15:20:08 crc kubenswrapper[4893]: I0128 15:20:08.978855 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-zjrm8" Jan 28 15:20:08 crc kubenswrapper[4893]: I0128 15:20:08.994047 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vdcjn" 
event={"ID":"72d2e324-70de-4019-9673-0a86620ca028","Type":"ContainerStarted","Data":"efbcc7f3f217318bb7a496513974c804a4dd0ddbd338bd86071dd8258bfb2a87"} Jan 28 15:20:08 crc kubenswrapper[4893]: I0128 15:20:08.994776 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vdcjn" Jan 28 15:20:09 crc kubenswrapper[4893]: I0128 15:20:09.000013 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-q9t8p" event={"ID":"9f55f343-0f75-4fed-ab7b-71c8dddd4af3","Type":"ContainerStarted","Data":"7d1da28a81163efab47008069bcce79c21b6cb5d4b28fd4d0a5911163b0bb6c5"} Jan 28 15:20:09 crc kubenswrapper[4893]: I0128 15:20:09.000317 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-q9t8p" Jan 28 15:20:09 crc kubenswrapper[4893]: I0128 15:20:09.008137 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bnr2s" event={"ID":"f1bf10ee-2d99-4b1b-ab99-ae2066b96522","Type":"ContainerStarted","Data":"0ba4654e5ada8ffccb756a6626a9d927cad66445d523234c29118a1759fd3e4a"} Jan 28 15:20:09 crc kubenswrapper[4893]: I0128 15:20:09.008815 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bnr2s" Jan 28 15:20:09 crc kubenswrapper[4893]: I0128 15:20:09.010660 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-b6cft" event={"ID":"379dbcd5-96e3-4563-ac73-7264f4b90d68","Type":"ContainerStarted","Data":"56855f9b3151df9d9aaa889ea6959cb38a22ff0093b650186072050bfcce269f"} Jan 28 15:20:09 crc kubenswrapper[4893]: I0128 15:20:09.011055 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-b6cft" Jan 28 15:20:09 crc kubenswrapper[4893]: I0128 15:20:09.024955 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ld4p5" podStartSLOduration=9.464120704 podStartE2EDuration="30.0249279s" podCreationTimestamp="2026-01-28 15:19:39 +0000 UTC" firstStartedPulling="2026-01-28 15:19:41.660104012 +0000 UTC m=+1099.433719040" lastFinishedPulling="2026-01-28 15:20:02.220911198 +0000 UTC m=+1119.994526236" observedRunningTime="2026-01-28 15:20:09.024286443 +0000 UTC m=+1126.797901491" watchObservedRunningTime="2026-01-28 15:20:09.0249279 +0000 UTC m=+1126.798542918" Jan 28 15:20:09 crc kubenswrapper[4893]: I0128 15:20:09.027157 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bsh7f" event={"ID":"2dee9e4e-11c8-4db6-a457-6f7bbf047f70","Type":"ContainerStarted","Data":"d0b6a51757642561a79d0e0ae2814ff20e81e277101dc64bece4e1d7de3c470b"} Jan 28 15:20:09 crc kubenswrapper[4893]: I0128 15:20:09.027787 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bsh7f" Jan 28 15:20:09 crc kubenswrapper[4893]: I0128 15:20:09.044252 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-b276g" 
event={"ID":"b70555f3-c876-49fc-bd77-83efa82abac7","Type":"ContainerStarted","Data":"dff15ec93a93cb48d93e5a51af5bd1ab0b1d0a0286b17bd58395da152c7b88a6"} Jan 28 15:20:09 crc kubenswrapper[4893]: I0128 15:20:09.044960 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-b276g" Jan 28 15:20:09 crc kubenswrapper[4893]: I0128 15:20:09.056519 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-p6nxj" podStartSLOduration=12.904839109 podStartE2EDuration="30.056501874s" podCreationTimestamp="2026-01-28 15:19:39 +0000 UTC" firstStartedPulling="2026-01-28 15:19:41.183356355 +0000 UTC m=+1098.956971383" lastFinishedPulling="2026-01-28 15:19:58.33501912 +0000 UTC m=+1116.108634148" observedRunningTime="2026-01-28 15:20:09.05160662 +0000 UTC m=+1126.825221648" watchObservedRunningTime="2026-01-28 15:20:09.056501874 +0000 UTC m=+1126.830116902" Jan 28 15:20:09 crc kubenswrapper[4893]: I0128 15:20:09.091134 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j8x44" podStartSLOduration=10.463756888 podStartE2EDuration="30.09111764s" podCreationTimestamp="2026-01-28 15:19:39 +0000 UTC" firstStartedPulling="2026-01-28 15:19:41.139690311 +0000 UTC m=+1098.913305339" lastFinishedPulling="2026-01-28 15:20:00.767051063 +0000 UTC m=+1118.540666091" observedRunningTime="2026-01-28 15:20:09.089462605 +0000 UTC m=+1126.863077633" watchObservedRunningTime="2026-01-28 15:20:09.09111764 +0000 UTC m=+1126.864732668" Jan 28 15:20:09 crc kubenswrapper[4893]: I0128 15:20:09.125663 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-qbfns" podStartSLOduration=9.638882973 podStartE2EDuration="30.125642734s" podCreationTimestamp="2026-01-28 15:19:39 +0000 UTC" firstStartedPulling="2026-01-28 15:19:41.734028894 +0000 UTC m=+1099.507643922" lastFinishedPulling="2026-01-28 15:20:02.220788655 +0000 UTC m=+1119.994403683" observedRunningTime="2026-01-28 15:20:09.120308408 +0000 UTC m=+1126.893923446" watchObservedRunningTime="2026-01-28 15:20:09.125642734 +0000 UTC m=+1126.899257752" Jan 28 15:20:09 crc kubenswrapper[4893]: I0128 15:20:09.174866 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rg997" podStartSLOduration=24.751219544 podStartE2EDuration="30.1748472s" podCreationTimestamp="2026-01-28 15:19:39 +0000 UTC" firstStartedPulling="2026-01-28 15:20:02.229463912 +0000 UTC m=+1120.003078940" lastFinishedPulling="2026-01-28 15:20:07.653091568 +0000 UTC m=+1125.426706596" observedRunningTime="2026-01-28 15:20:09.172368072 +0000 UTC m=+1126.945983100" watchObservedRunningTime="2026-01-28 15:20:09.1748472 +0000 UTC m=+1126.948462228" Jan 28 15:20:09 crc kubenswrapper[4893]: I0128 15:20:09.207559 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-jnhg7" podStartSLOduration=13.026151357 podStartE2EDuration="30.207541954s" podCreationTimestamp="2026-01-28 15:19:39 +0000 UTC" firstStartedPulling="2026-01-28 15:19:41.152948584 +0000 UTC m=+1098.926563612" lastFinishedPulling="2026-01-28 15:19:58.334339181 +0000 UTC m=+1116.107954209" observedRunningTime="2026-01-28 15:20:09.203033011 +0000 UTC 
m=+1126.976648039" watchObservedRunningTime="2026-01-28 15:20:09.207541954 +0000 UTC m=+1126.981156982" Jan 28 15:20:09 crc kubenswrapper[4893]: I0128 15:20:09.245798 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-75d84bc6b9-s5v4q" podStartSLOduration=4.520398401 podStartE2EDuration="30.245778309s" podCreationTimestamp="2026-01-28 15:19:39 +0000 UTC" firstStartedPulling="2026-01-28 15:19:41.867801161 +0000 UTC m=+1099.641416189" lastFinishedPulling="2026-01-28 15:20:07.593181069 +0000 UTC m=+1125.366796097" observedRunningTime="2026-01-28 15:20:09.240080093 +0000 UTC m=+1127.013695121" watchObservedRunningTime="2026-01-28 15:20:09.245778309 +0000 UTC m=+1127.019393337" Jan 28 15:20:09 crc kubenswrapper[4893]: I0128 15:20:09.359163 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-q9t8p" podStartSLOduration=7.455794278 podStartE2EDuration="30.35914463s" podCreationTimestamp="2026-01-28 15:19:39 +0000 UTC" firstStartedPulling="2026-01-28 15:19:41.922578819 +0000 UTC m=+1099.696193847" lastFinishedPulling="2026-01-28 15:20:04.825929171 +0000 UTC m=+1122.599544199" observedRunningTime="2026-01-28 15:20:09.316284867 +0000 UTC m=+1127.089899895" watchObservedRunningTime="2026-01-28 15:20:09.35914463 +0000 UTC m=+1127.132759648" Jan 28 15:20:09 crc kubenswrapper[4893]: I0128 15:20:09.396301 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-b276g" podStartSLOduration=7.470836548 podStartE2EDuration="30.396281485s" podCreationTimestamp="2026-01-28 15:19:39 +0000 UTC" firstStartedPulling="2026-01-28 15:19:41.90068805 +0000 UTC m=+1099.674303068" lastFinishedPulling="2026-01-28 15:20:04.826132977 +0000 UTC m=+1122.599748005" observedRunningTime="2026-01-28 15:20:09.359588882 +0000 UTC m=+1127.133203910" watchObservedRunningTime="2026-01-28 15:20:09.396281485 +0000 UTC m=+1127.169896503" Jan 28 15:20:09 crc kubenswrapper[4893]: I0128 15:20:09.443211 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vdcjn" podStartSLOduration=13.075444935 podStartE2EDuration="30.443195668s" podCreationTimestamp="2026-01-28 15:19:39 +0000 UTC" firstStartedPulling="2026-01-28 15:19:40.966425064 +0000 UTC m=+1098.740040092" lastFinishedPulling="2026-01-28 15:19:58.334175787 +0000 UTC m=+1116.107790825" observedRunningTime="2026-01-28 15:20:09.39758425 +0000 UTC m=+1127.171199278" watchObservedRunningTime="2026-01-28 15:20:09.443195668 +0000 UTC m=+1127.216810686" Jan 28 15:20:09 crc kubenswrapper[4893]: I0128 15:20:09.447707 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bsh7f" podStartSLOduration=7.525836433 podStartE2EDuration="30.44769038s" podCreationTimestamp="2026-01-28 15:19:39 +0000 UTC" firstStartedPulling="2026-01-28 15:19:41.905489502 +0000 UTC m=+1099.679104530" lastFinishedPulling="2026-01-28 15:20:04.827343449 +0000 UTC m=+1122.600958477" observedRunningTime="2026-01-28 15:20:09.441533772 +0000 UTC m=+1127.215148830" watchObservedRunningTime="2026-01-28 15:20:09.44769038 +0000 UTC m=+1127.221305408" Jan 28 15:20:09 crc kubenswrapper[4893]: I0128 15:20:09.460923 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/test-operator-controller-manager-69797bbcbd-zjrm8" podStartSLOduration=4.787614637 podStartE2EDuration="30.460898902s" podCreationTimestamp="2026-01-28 15:19:39 +0000 UTC" firstStartedPulling="2026-01-28 15:19:41.901157523 +0000 UTC m=+1099.674772551" lastFinishedPulling="2026-01-28 15:20:07.574441788 +0000 UTC m=+1125.348056816" observedRunningTime="2026-01-28 15:20:09.456651716 +0000 UTC m=+1127.230266744" watchObservedRunningTime="2026-01-28 15:20:09.460898902 +0000 UTC m=+1127.234513930" Jan 28 15:20:09 crc kubenswrapper[4893]: I0128 15:20:09.520092 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-b6cft" podStartSLOduration=10.032916107 podStartE2EDuration="30.52007304s" podCreationTimestamp="2026-01-28 15:19:39 +0000 UTC" firstStartedPulling="2026-01-28 15:19:41.733686113 +0000 UTC m=+1099.507301141" lastFinishedPulling="2026-01-28 15:20:02.220843056 +0000 UTC m=+1119.994458074" observedRunningTime="2026-01-28 15:20:09.484560989 +0000 UTC m=+1127.258176037" watchObservedRunningTime="2026-01-28 15:20:09.52007304 +0000 UTC m=+1127.293688068" Jan 28 15:20:09 crc kubenswrapper[4893]: I0128 15:20:09.520514 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bnr2s" podStartSLOduration=4.229722742 podStartE2EDuration="30.520507571s" podCreationTimestamp="2026-01-28 15:19:39 +0000 UTC" firstStartedPulling="2026-01-28 15:19:41.792547793 +0000 UTC m=+1099.566162821" lastFinishedPulling="2026-01-28 15:20:08.083332622 +0000 UTC m=+1125.856947650" observedRunningTime="2026-01-28 15:20:09.516996816 +0000 UTC m=+1127.290611844" watchObservedRunningTime="2026-01-28 15:20:09.520507571 +0000 UTC m=+1127.294122589" Jan 28 15:20:10 crc kubenswrapper[4893]: I0128 15:20:10.020371 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-jfx6g" Jan 28 15:20:11 crc kubenswrapper[4893]: I0128 15:20:11.100811 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-nd8rm" event={"ID":"d578cfaa-0b09-476e-9cd0-abd3d6274bd7","Type":"ContainerStarted","Data":"a2875bd7da448c8578a1d34c6edfd062c9d6864c4c186b7a3cf7910c6963a77c"} Jan 28 15:20:11 crc kubenswrapper[4893]: I0128 15:20:11.101946 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-nd8rm" Jan 28 15:20:11 crc kubenswrapper[4893]: I0128 15:20:11.105076 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-2qgj6" event={"ID":"e1e458d4-37a1-4111-9e2d-fa49cbdd9e08","Type":"ContainerStarted","Data":"ebefedddd60268eedadf6f46974686bcb94862feae870e6af42cd904a77bb999"} Jan 28 15:20:11 crc kubenswrapper[4893]: I0128 15:20:11.105623 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-2qgj6" Jan 28 15:20:11 crc kubenswrapper[4893]: I0128 15:20:11.128568 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-nd8rm" podStartSLOduration=3.855537109 podStartE2EDuration="32.128539012s" podCreationTimestamp="2026-01-28 15:19:39 +0000 UTC" firstStartedPulling="2026-01-28 
15:19:41.67504461 +0000 UTC m=+1099.448659638" lastFinishedPulling="2026-01-28 15:20:09.948046523 +0000 UTC m=+1127.721661541" observedRunningTime="2026-01-28 15:20:11.119892236 +0000 UTC m=+1128.893507284" watchObservedRunningTime="2026-01-28 15:20:11.128539012 +0000 UTC m=+1128.902154030" Jan 28 15:20:11 crc kubenswrapper[4893]: I0128 15:20:11.144307 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-2qgj6" podStartSLOduration=3.996940167 podStartE2EDuration="32.144285663s" podCreationTimestamp="2026-01-28 15:19:39 +0000 UTC" firstStartedPulling="2026-01-28 15:19:41.805773195 +0000 UTC m=+1099.579388223" lastFinishedPulling="2026-01-28 15:20:09.953118701 +0000 UTC m=+1127.726733719" observedRunningTime="2026-01-28 15:20:11.138289909 +0000 UTC m=+1128.911904947" watchObservedRunningTime="2026-01-28 15:20:11.144285663 +0000 UTC m=+1128.917900691" Jan 28 15:20:11 crc kubenswrapper[4893]: I0128 15:20:11.933676 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bfe9e7f0-b5aa-48a6-9487-e1765752c644-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt\" (UID: \"bfe9e7f0-b5aa-48a6-9487-e1765752c644\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt" Jan 28 15:20:11 crc kubenswrapper[4893]: I0128 15:20:11.940578 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bfe9e7f0-b5aa-48a6-9487-e1765752c644-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt\" (UID: \"bfe9e7f0-b5aa-48a6-9487-e1765752c644\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt" Jan 28 15:20:12 crc kubenswrapper[4893]: I0128 15:20:12.118751 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt" Jan 28 15:20:12 crc kubenswrapper[4893]: I0128 15:20:12.452582 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-webhook-certs\") pod \"openstack-operator-controller-manager-5fd66b5d9c-j5x2h\" (UID: \"24fb3958-2b40-4b9d-90ee-591dafc3987e\") " pod="openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h" Jan 28 15:20:12 crc kubenswrapper[4893]: I0128 15:20:12.452970 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-metrics-certs\") pod \"openstack-operator-controller-manager-5fd66b5d9c-j5x2h\" (UID: \"24fb3958-2b40-4b9d-90ee-591dafc3987e\") " pod="openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h" Jan 28 15:20:12 crc kubenswrapper[4893]: I0128 15:20:12.463549 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-webhook-certs\") pod \"openstack-operator-controller-manager-5fd66b5d9c-j5x2h\" (UID: \"24fb3958-2b40-4b9d-90ee-591dafc3987e\") " pod="openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h" Jan 28 15:20:12 crc kubenswrapper[4893]: I0128 15:20:12.473523 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24fb3958-2b40-4b9d-90ee-591dafc3987e-metrics-certs\") pod \"openstack-operator-controller-manager-5fd66b5d9c-j5x2h\" (UID: \"24fb3958-2b40-4b9d-90ee-591dafc3987e\") " pod="openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h" Jan 28 15:20:12 crc kubenswrapper[4893]: I0128 15:20:12.557554 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt"] Jan 28 15:20:12 crc kubenswrapper[4893]: I0128 15:20:12.571448 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h" Jan 28 15:20:13 crc kubenswrapper[4893]: I0128 15:20:13.033346 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h"] Jan 28 15:20:13 crc kubenswrapper[4893]: I0128 15:20:13.118702 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt" event={"ID":"bfe9e7f0-b5aa-48a6-9487-e1765752c644","Type":"ContainerStarted","Data":"cec74c9cf9567c6da29b5bb9d307fe89f5de38f9bd1aaec32c5becb0c006a581"} Jan 28 15:20:13 crc kubenswrapper[4893]: I0128 15:20:13.120117 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h" event={"ID":"24fb3958-2b40-4b9d-90ee-591dafc3987e","Type":"ContainerStarted","Data":"d88455fdfffcf9dfadcf12dcf86a4a624b7e8a86fda1bdad34ce7f0fd1c483c5"} Jan 28 15:20:15 crc kubenswrapper[4893]: I0128 15:20:15.871826 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-rg997" Jan 28 15:20:19 crc kubenswrapper[4893]: I0128 15:20:19.656257 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-p6nxj" Jan 28 15:20:19 crc kubenswrapper[4893]: I0128 15:20:19.667847 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-vdcjn" Jan 28 15:20:19 crc kubenswrapper[4893]: I0128 15:20:19.735824 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-jnhg7" Jan 28 15:20:19 crc kubenswrapper[4893]: I0128 15:20:19.851458 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-j8x44" Jan 28 15:20:19 crc kubenswrapper[4893]: I0128 15:20:19.938981 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-dqldg" Jan 28 15:20:20 crc kubenswrapper[4893]: I0128 15:20:20.079598 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-nd8rm" Jan 28 15:20:20 crc kubenswrapper[4893]: I0128 15:20:20.162082 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-qbfns" Jan 28 15:20:20 crc kubenswrapper[4893]: I0128 15:20:20.185604 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-2qgj6" Jan 28 15:20:20 crc kubenswrapper[4893]: I0128 15:20:20.190406 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4rgm2" event={"ID":"7740f64d-b660-493b-b3f5-1041a0ce3061","Type":"ContainerStarted","Data":"6d0f74e9d30b11d9916107178a549c682f9c096feeae26301fbd7500780e9e3c"} Jan 28 15:20:20 crc kubenswrapper[4893]: I0128 15:20:20.190993 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4rgm2" Jan 28 15:20:20 crc 
kubenswrapper[4893]: I0128 15:20:20.247906 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4rgm2" podStartSLOduration=11.66218169 podStartE2EDuration="41.247889427s" podCreationTimestamp="2026-01-28 15:19:39 +0000 UTC" firstStartedPulling="2026-01-28 15:19:41.843913218 +0000 UTC m=+1099.617528246" lastFinishedPulling="2026-01-28 15:20:11.429620945 +0000 UTC m=+1129.203235983" observedRunningTime="2026-01-28 15:20:20.229877474 +0000 UTC m=+1138.003492502" watchObservedRunningTime="2026-01-28 15:20:20.247889427 +0000 UTC m=+1138.021504455" Jan 28 15:20:20 crc kubenswrapper[4893]: I0128 15:20:20.253739 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-b6cft" Jan 28 15:20:20 crc kubenswrapper[4893]: I0128 15:20:20.366586 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ld4p5" Jan 28 15:20:20 crc kubenswrapper[4893]: I0128 15:20:20.507064 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-b276g" Jan 28 15:20:20 crc kubenswrapper[4893]: I0128 15:20:20.516547 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-75d84bc6b9-s5v4q" Jan 28 15:20:20 crc kubenswrapper[4893]: I0128 15:20:20.565181 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-bnr2s" Jan 28 15:20:20 crc kubenswrapper[4893]: I0128 15:20:20.657257 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-zjrm8" Jan 28 15:20:20 crc kubenswrapper[4893]: I0128 15:20:20.714573 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-bsh7f" Jan 28 15:20:20 crc kubenswrapper[4893]: I0128 15:20:20.758229 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-q9t8p" Jan 28 15:20:21 crc kubenswrapper[4893]: I0128 15:20:21.219863 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h" event={"ID":"24fb3958-2b40-4b9d-90ee-591dafc3987e","Type":"ContainerStarted","Data":"78d338630337bcd7251a29e1b9464c3070c5c70c5540910f0c12d02580eb06ea"} Jan 28 15:20:21 crc kubenswrapper[4893]: I0128 15:20:21.249331 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h" podStartSLOduration=41.24930783 podStartE2EDuration="41.24930783s" podCreationTimestamp="2026-01-28 15:19:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:20:21.241099256 +0000 UTC m=+1139.014714284" watchObservedRunningTime="2026-01-28 15:20:21.24930783 +0000 UTC m=+1139.022922858" Jan 28 15:20:22 crc kubenswrapper[4893]: I0128 15:20:22.226405 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h" Jan 28 15:20:26 crc 
kubenswrapper[4893]: I0128 15:20:26.262658 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt" event={"ID":"bfe9e7f0-b5aa-48a6-9487-e1765752c644","Type":"ContainerStarted","Data":"f925bb768067b9389d9d56ba21e8f5150ed6178df6db741e5c15c812de6d7d37"} Jan 28 15:20:26 crc kubenswrapper[4893]: I0128 15:20:26.263255 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt" Jan 28 15:20:26 crc kubenswrapper[4893]: I0128 15:20:26.266932 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-dlrsm" event={"ID":"4179ac2f-dd41-4cd3-8558-6daba8252582","Type":"ContainerStarted","Data":"e5a54d149337ac00ae326c6a2ca005ebe46a4a0f92eb70f229ec8bc5bf671bc3"} Jan 28 15:20:26 crc kubenswrapper[4893]: I0128 15:20:26.267188 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-dlrsm" Jan 28 15:20:26 crc kubenswrapper[4893]: I0128 15:20:26.291446 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt" podStartSLOduration=34.666479441 podStartE2EDuration="47.291428305s" podCreationTimestamp="2026-01-28 15:19:39 +0000 UTC" firstStartedPulling="2026-01-28 15:20:12.563192562 +0000 UTC m=+1130.336807590" lastFinishedPulling="2026-01-28 15:20:25.188141426 +0000 UTC m=+1142.961756454" observedRunningTime="2026-01-28 15:20:26.284597708 +0000 UTC m=+1144.058212766" watchObservedRunningTime="2026-01-28 15:20:26.291428305 +0000 UTC m=+1144.065043333" Jan 28 15:20:26 crc kubenswrapper[4893]: I0128 15:20:26.312737 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-dlrsm" podStartSLOduration=3.685578293 podStartE2EDuration="47.312719267s" podCreationTimestamp="2026-01-28 15:19:39 +0000 UTC" firstStartedPulling="2026-01-28 15:19:41.497128305 +0000 UTC m=+1099.270743333" lastFinishedPulling="2026-01-28 15:20:25.124269269 +0000 UTC m=+1142.897884307" observedRunningTime="2026-01-28 15:20:26.312386757 +0000 UTC m=+1144.086001825" watchObservedRunningTime="2026-01-28 15:20:26.312719267 +0000 UTC m=+1144.086334305" Jan 28 15:20:30 crc kubenswrapper[4893]: I0128 15:20:30.012094 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4rgm2" Jan 28 15:20:30 crc kubenswrapper[4893]: I0128 15:20:30.093832 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-dlrsm" Jan 28 15:20:32 crc kubenswrapper[4893]: I0128 15:20:32.125401 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt" Jan 28 15:20:32 crc kubenswrapper[4893]: I0128 15:20:32.578882 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5fd66b5d9c-j5x2h" Jan 28 15:20:40 crc kubenswrapper[4893]: I0128 15:20:40.916385 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-server-0"] Jan 28 15:20:40 crc kubenswrapper[4893]: I0128 15:20:40.920441 
4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:20:40 crc kubenswrapper[4893]: I0128 15:20:40.928042 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-server-conf" Jan 28 15:20:40 crc kubenswrapper[4893]: I0128 15:20:40.928500 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-server-dockercfg-c57s6" Jan 28 15:20:40 crc kubenswrapper[4893]: I0128 15:20:40.928716 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-plugins-conf" Jan 28 15:20:40 crc kubenswrapper[4893]: I0128 15:20:40.928917 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-default-user" Jan 28 15:20:40 crc kubenswrapper[4893]: I0128 15:20:40.929131 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openshift-service-ca.crt" Jan 28 15:20:40 crc kubenswrapper[4893]: I0128 15:20:40.932692 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"kube-root-ca.crt" Jan 28 15:20:40 crc kubenswrapper[4893]: I0128 15:20:40.934342 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-erlang-cookie" Jan 28 15:20:40 crc kubenswrapper[4893]: I0128 15:20:40.952706 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-server-0"] Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.099053 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/01a81616-675d-43ec-acb2-7a4541b96771-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"01a81616-675d-43ec-acb2-7a4541b96771\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.099129 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b7a11661-a0b3-46b6-b51a-692e85c3741b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7a11661-a0b3-46b6-b51a-692e85c3741b\") pod \"rabbitmq-server-0\" (UID: \"01a81616-675d-43ec-acb2-7a4541b96771\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.099255 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01a81616-675d-43ec-acb2-7a4541b96771-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"01a81616-675d-43ec-acb2-7a4541b96771\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.099412 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr5ql\" (UniqueName: \"kubernetes.io/projected/01a81616-675d-43ec-acb2-7a4541b96771-kube-api-access-wr5ql\") pod \"rabbitmq-server-0\" (UID: \"01a81616-675d-43ec-acb2-7a4541b96771\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.099448 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/01a81616-675d-43ec-acb2-7a4541b96771-pod-info\") pod \"rabbitmq-server-0\" (UID: \"01a81616-675d-43ec-acb2-7a4541b96771\") " pod="nova-kuttl-default/rabbitmq-server-0" 
Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.099491 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01a81616-675d-43ec-acb2-7a4541b96771-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"01a81616-675d-43ec-acb2-7a4541b96771\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.099521 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01a81616-675d-43ec-acb2-7a4541b96771-server-conf\") pod \"rabbitmq-server-0\" (UID: \"01a81616-675d-43ec-acb2-7a4541b96771\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.099537 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01a81616-675d-43ec-acb2-7a4541b96771-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"01a81616-675d-43ec-acb2-7a4541b96771\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.099571 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/01a81616-675d-43ec-acb2-7a4541b96771-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"01a81616-675d-43ec-acb2-7a4541b96771\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.200517 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b7a11661-a0b3-46b6-b51a-692e85c3741b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7a11661-a0b3-46b6-b51a-692e85c3741b\") pod \"rabbitmq-server-0\" (UID: \"01a81616-675d-43ec-acb2-7a4541b96771\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.200582 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01a81616-675d-43ec-acb2-7a4541b96771-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"01a81616-675d-43ec-acb2-7a4541b96771\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.200628 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr5ql\" (UniqueName: \"kubernetes.io/projected/01a81616-675d-43ec-acb2-7a4541b96771-kube-api-access-wr5ql\") pod \"rabbitmq-server-0\" (UID: \"01a81616-675d-43ec-acb2-7a4541b96771\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.200645 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/01a81616-675d-43ec-acb2-7a4541b96771-pod-info\") pod \"rabbitmq-server-0\" (UID: \"01a81616-675d-43ec-acb2-7a4541b96771\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.200663 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01a81616-675d-43ec-acb2-7a4541b96771-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"01a81616-675d-43ec-acb2-7a4541b96771\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:20:41 crc 
kubenswrapper[4893]: I0128 15:20:41.200682 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01a81616-675d-43ec-acb2-7a4541b96771-server-conf\") pod \"rabbitmq-server-0\" (UID: \"01a81616-675d-43ec-acb2-7a4541b96771\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.200697 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01a81616-675d-43ec-acb2-7a4541b96771-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"01a81616-675d-43ec-acb2-7a4541b96771\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.200715 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/01a81616-675d-43ec-acb2-7a4541b96771-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"01a81616-675d-43ec-acb2-7a4541b96771\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.200745 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/01a81616-675d-43ec-acb2-7a4541b96771-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"01a81616-675d-43ec-acb2-7a4541b96771\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.202334 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01a81616-675d-43ec-acb2-7a4541b96771-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"01a81616-675d-43ec-acb2-7a4541b96771\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.203436 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01a81616-675d-43ec-acb2-7a4541b96771-server-conf\") pod \"rabbitmq-server-0\" (UID: \"01a81616-675d-43ec-acb2-7a4541b96771\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.204095 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01a81616-675d-43ec-acb2-7a4541b96771-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"01a81616-675d-43ec-acb2-7a4541b96771\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.204304 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01a81616-675d-43ec-acb2-7a4541b96771-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"01a81616-675d-43ec-acb2-7a4541b96771\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.206216 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/01a81616-675d-43ec-acb2-7a4541b96771-pod-info\") pod \"rabbitmq-server-0\" (UID: \"01a81616-675d-43ec-acb2-7a4541b96771\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.207719 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/01a81616-675d-43ec-acb2-7a4541b96771-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"01a81616-675d-43ec-acb2-7a4541b96771\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.211152 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/01a81616-675d-43ec-acb2-7a4541b96771-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"01a81616-675d-43ec-acb2-7a4541b96771\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.218865 4893 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.218916 4893 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b7a11661-a0b3-46b6-b51a-692e85c3741b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7a11661-a0b3-46b6-b51a-692e85c3741b\") pod \"rabbitmq-server-0\" (UID: \"01a81616-675d-43ec-acb2-7a4541b96771\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/26edfeec84422f046edecd9748477765e84dc592a8d414d16f108937cbde8c6d/globalmount\"" pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.240059 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr5ql\" (UniqueName: \"kubernetes.io/projected/01a81616-675d-43ec-acb2-7a4541b96771-kube-api-access-wr5ql\") pod \"rabbitmq-server-0\" (UID: \"01a81616-675d-43ec-acb2-7a4541b96771\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.258174 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"] Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.259353 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.266699 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-default-user" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.266837 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-erlang-cookie" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.266955 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-broadcaster-plugins-conf" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.267208 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-broadcaster-server-conf" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.267624 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-server-dockercfg-hd8t7" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.289394 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"] Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.395250 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b7a11661-a0b3-46b6-b51a-692e85c3741b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b7a11661-a0b3-46b6-b51a-692e85c3741b\") pod \"rabbitmq-server-0\" (UID: \"01a81616-675d-43ec-acb2-7a4541b96771\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.403170 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dcd1c126-70b7-46e1-8226-bc7dc353ecdb-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"dcd1c126-70b7-46e1-8226-bc7dc353ecdb\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.403252 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dcd1c126-70b7-46e1-8226-bc7dc353ecdb-server-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"dcd1c126-70b7-46e1-8226-bc7dc353ecdb\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.403315 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dcd1c126-70b7-46e1-8226-bc7dc353ecdb-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"dcd1c126-70b7-46e1-8226-bc7dc353ecdb\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.403333 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z2rr\" (UniqueName: \"kubernetes.io/projected/dcd1c126-70b7-46e1-8226-bc7dc353ecdb-kube-api-access-4z2rr\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"dcd1c126-70b7-46e1-8226-bc7dc353ecdb\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.403370 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/dcd1c126-70b7-46e1-8226-bc7dc353ecdb-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"dcd1c126-70b7-46e1-8226-bc7dc353ecdb\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.403394 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dcd1c126-70b7-46e1-8226-bc7dc353ecdb-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"dcd1c126-70b7-46e1-8226-bc7dc353ecdb\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.403426 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-af41105e-6246-478b-818e-269203e5223c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-af41105e-6246-478b-818e-269203e5223c\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"dcd1c126-70b7-46e1-8226-bc7dc353ecdb\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.403530 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dcd1c126-70b7-46e1-8226-bc7dc353ecdb-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"dcd1c126-70b7-46e1-8226-bc7dc353ecdb\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.403596 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dcd1c126-70b7-46e1-8226-bc7dc353ecdb-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"dcd1c126-70b7-46e1-8226-bc7dc353ecdb\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.505186 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dcd1c126-70b7-46e1-8226-bc7dc353ecdb-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"dcd1c126-70b7-46e1-8226-bc7dc353ecdb\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.505240 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dcd1c126-70b7-46e1-8226-bc7dc353ecdb-server-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"dcd1c126-70b7-46e1-8226-bc7dc353ecdb\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.505283 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dcd1c126-70b7-46e1-8226-bc7dc353ecdb-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"dcd1c126-70b7-46e1-8226-bc7dc353ecdb\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.505303 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z2rr\" (UniqueName: \"kubernetes.io/projected/dcd1c126-70b7-46e1-8226-bc7dc353ecdb-kube-api-access-4z2rr\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"dcd1c126-70b7-46e1-8226-bc7dc353ecdb\") " 
pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.505322 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dcd1c126-70b7-46e1-8226-bc7dc353ecdb-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"dcd1c126-70b7-46e1-8226-bc7dc353ecdb\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.505340 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dcd1c126-70b7-46e1-8226-bc7dc353ecdb-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"dcd1c126-70b7-46e1-8226-bc7dc353ecdb\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.505367 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-af41105e-6246-478b-818e-269203e5223c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-af41105e-6246-478b-818e-269203e5223c\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"dcd1c126-70b7-46e1-8226-bc7dc353ecdb\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.505395 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dcd1c126-70b7-46e1-8226-bc7dc353ecdb-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"dcd1c126-70b7-46e1-8226-bc7dc353ecdb\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.505442 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dcd1c126-70b7-46e1-8226-bc7dc353ecdb-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"dcd1c126-70b7-46e1-8226-bc7dc353ecdb\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.506210 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dcd1c126-70b7-46e1-8226-bc7dc353ecdb-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"dcd1c126-70b7-46e1-8226-bc7dc353ecdb\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.506241 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dcd1c126-70b7-46e1-8226-bc7dc353ecdb-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"dcd1c126-70b7-46e1-8226-bc7dc353ecdb\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.506420 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dcd1c126-70b7-46e1-8226-bc7dc353ecdb-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"dcd1c126-70b7-46e1-8226-bc7dc353ecdb\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.506797 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dcd1c126-70b7-46e1-8226-bc7dc353ecdb-server-conf\") pod 
\"rabbitmq-broadcaster-server-0\" (UID: \"dcd1c126-70b7-46e1-8226-bc7dc353ecdb\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.507880 4893 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.507913 4893 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-af41105e-6246-478b-818e-269203e5223c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-af41105e-6246-478b-818e-269203e5223c\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"dcd1c126-70b7-46e1-8226-bc7dc353ecdb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fbe1d05ad4f11e37294a6757ee75ed7877194dd425b79c01febc3910fe18608c/globalmount\"" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.508533 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dcd1c126-70b7-46e1-8226-bc7dc353ecdb-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"dcd1c126-70b7-46e1-8226-bc7dc353ecdb\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.510853 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dcd1c126-70b7-46e1-8226-bc7dc353ecdb-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"dcd1c126-70b7-46e1-8226-bc7dc353ecdb\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.514515 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dcd1c126-70b7-46e1-8226-bc7dc353ecdb-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"dcd1c126-70b7-46e1-8226-bc7dc353ecdb\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.529136 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z2rr\" (UniqueName: \"kubernetes.io/projected/dcd1c126-70b7-46e1-8226-bc7dc353ecdb-kube-api-access-4z2rr\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"dcd1c126-70b7-46e1-8226-bc7dc353ecdb\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.545948 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"] Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.553554 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.555826 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-cell1-plugins-conf" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.556056 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-cell1-server-conf" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.556202 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-erlang-cookie" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.556241 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-af41105e-6246-478b-818e-269203e5223c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-af41105e-6246-478b-818e-269203e5223c\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"dcd1c126-70b7-46e1-8226-bc7dc353ecdb\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.556609 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-server-dockercfg-qwcnw" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.556749 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-default-user" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.559286 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"] Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.559638 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.642035 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.712108 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67b2b466-ebc4-41d8-8b96-a285eb0609f5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"67b2b466-ebc4-41d8-8b96-a285eb0609f5\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.712562 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67b2b466-ebc4-41d8-8b96-a285eb0609f5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"67b2b466-ebc4-41d8-8b96-a285eb0609f5\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.712605 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67b2b466-ebc4-41d8-8b96-a285eb0609f5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"67b2b466-ebc4-41d8-8b96-a285eb0609f5\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.712650 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjjmd\" (UniqueName: \"kubernetes.io/projected/67b2b466-ebc4-41d8-8b96-a285eb0609f5-kube-api-access-bjjmd\") pod \"rabbitmq-cell1-server-0\" (UID: \"67b2b466-ebc4-41d8-8b96-a285eb0609f5\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.712690 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67b2b466-ebc4-41d8-8b96-a285eb0609f5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"67b2b466-ebc4-41d8-8b96-a285eb0609f5\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.712711 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67b2b466-ebc4-41d8-8b96-a285eb0609f5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"67b2b466-ebc4-41d8-8b96-a285eb0609f5\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.712730 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67b2b466-ebc4-41d8-8b96-a285eb0609f5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"67b2b466-ebc4-41d8-8b96-a285eb0609f5\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.712755 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0044f15a-3826-4019-aa3c-6e2127f25332\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0044f15a-3826-4019-aa3c-6e2127f25332\") pod \"rabbitmq-cell1-server-0\" (UID: \"67b2b466-ebc4-41d8-8b96-a285eb0609f5\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.712774 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67b2b466-ebc4-41d8-8b96-a285eb0609f5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"67b2b466-ebc4-41d8-8b96-a285eb0609f5\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.814068 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjjmd\" (UniqueName: \"kubernetes.io/projected/67b2b466-ebc4-41d8-8b96-a285eb0609f5-kube-api-access-bjjmd\") pod \"rabbitmq-cell1-server-0\" (UID: \"67b2b466-ebc4-41d8-8b96-a285eb0609f5\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.814147 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67b2b466-ebc4-41d8-8b96-a285eb0609f5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"67b2b466-ebc4-41d8-8b96-a285eb0609f5\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.814176 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67b2b466-ebc4-41d8-8b96-a285eb0609f5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"67b2b466-ebc4-41d8-8b96-a285eb0609f5\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.814194 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67b2b466-ebc4-41d8-8b96-a285eb0609f5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"67b2b466-ebc4-41d8-8b96-a285eb0609f5\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.815065 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67b2b466-ebc4-41d8-8b96-a285eb0609f5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"67b2b466-ebc4-41d8-8b96-a285eb0609f5\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.814223 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0044f15a-3826-4019-aa3c-6e2127f25332\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0044f15a-3826-4019-aa3c-6e2127f25332\") pod \"rabbitmq-cell1-server-0\" (UID: \"67b2b466-ebc4-41d8-8b96-a285eb0609f5\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.815163 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67b2b466-ebc4-41d8-8b96-a285eb0609f5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"67b2b466-ebc4-41d8-8b96-a285eb0609f5\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.815198 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67b2b466-ebc4-41d8-8b96-a285eb0609f5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"67b2b466-ebc4-41d8-8b96-a285eb0609f5\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.815260 4893 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67b2b466-ebc4-41d8-8b96-a285eb0609f5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"67b2b466-ebc4-41d8-8b96-a285eb0609f5\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.815320 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67b2b466-ebc4-41d8-8b96-a285eb0609f5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"67b2b466-ebc4-41d8-8b96-a285eb0609f5\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.815827 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67b2b466-ebc4-41d8-8b96-a285eb0609f5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"67b2b466-ebc4-41d8-8b96-a285eb0609f5\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.817037 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67b2b466-ebc4-41d8-8b96-a285eb0609f5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"67b2b466-ebc4-41d8-8b96-a285eb0609f5\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.817457 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67b2b466-ebc4-41d8-8b96-a285eb0609f5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"67b2b466-ebc4-41d8-8b96-a285eb0609f5\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.819002 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67b2b466-ebc4-41d8-8b96-a285eb0609f5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"67b2b466-ebc4-41d8-8b96-a285eb0609f5\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.821002 4893 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.821030 4893 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0044f15a-3826-4019-aa3c-6e2127f25332\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0044f15a-3826-4019-aa3c-6e2127f25332\") pod \"rabbitmq-cell1-server-0\" (UID: \"67b2b466-ebc4-41d8-8b96-a285eb0609f5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/95236fc29d5a10aed72672755a16d3dbe5fd2ecc86aa4bbbeccb8b9007740298/globalmount\"" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.822972 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67b2b466-ebc4-41d8-8b96-a285eb0609f5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"67b2b466-ebc4-41d8-8b96-a285eb0609f5\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.831159 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjjmd\" (UniqueName: \"kubernetes.io/projected/67b2b466-ebc4-41d8-8b96-a285eb0609f5-kube-api-access-bjjmd\") pod \"rabbitmq-cell1-server-0\" (UID: \"67b2b466-ebc4-41d8-8b96-a285eb0609f5\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.840239 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67b2b466-ebc4-41d8-8b96-a285eb0609f5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"67b2b466-ebc4-41d8-8b96-a285eb0609f5\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.860858 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0044f15a-3826-4019-aa3c-6e2127f25332\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0044f15a-3826-4019-aa3c-6e2127f25332\") pod \"rabbitmq-cell1-server-0\" (UID: \"67b2b466-ebc4-41d8-8b96-a285eb0609f5\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.876132 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.952162 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"] Jan 28 15:20:41 crc kubenswrapper[4893]: W0128 15:20:41.959421 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcd1c126_70b7_46e1_8226_bc7dc353ecdb.slice/crio-ddce2d18f87c19f4c90a57baa0fdb88914fb422a34edca45ad23d559f376169b WatchSource:0}: Error finding container ddce2d18f87c19f4c90a57baa0fdb88914fb422a34edca45ad23d559f376169b: Status 404 returned error can't find the container with id ddce2d18f87c19f4c90a57baa0fdb88914fb422a34edca45ad23d559f376169b Jan 28 15:20:41 crc kubenswrapper[4893]: I0128 15:20:41.962633 4893 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.048215 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-server-0"] Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.362996 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"] Jan 28 15:20:42 crc kubenswrapper[4893]: W0128 15:20:42.371577 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67b2b466_ebc4_41d8_8b96_a285eb0609f5.slice/crio-41a560a828ca05607b86f2ebffdaafd985cb3eebe909b3d592e113a9ca6b62aa WatchSource:0}: Error finding container 41a560a828ca05607b86f2ebffdaafd985cb3eebe909b3d592e113a9ca6b62aa: Status 404 returned error can't find the container with id 41a560a828ca05607b86f2ebffdaafd985cb3eebe909b3d592e113a9ca6b62aa Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.376296 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"dcd1c126-70b7-46e1-8226-bc7dc353ecdb","Type":"ContainerStarted","Data":"ddce2d18f87c19f4c90a57baa0fdb88914fb422a34edca45ad23d559f376169b"} Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.378798 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"01a81616-675d-43ec-acb2-7a4541b96771","Type":"ContainerStarted","Data":"cfab982318d278e792857f6672e9133e18458665f2952d9317029b999f2121f2"} Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.427431 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/memcached-0"] Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.428635 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/memcached-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.430609 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"memcached-memcached-dockercfg-h8zpk" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.439548 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"memcached-config-data" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.454250 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/memcached-0"] Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.528503 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4e4c3f33-0d15-4434-9940-21a310e1e272-kolla-config\") pod \"memcached-0\" (UID: \"4e4c3f33-0d15-4434-9940-21a310e1e272\") " pod="nova-kuttl-default/memcached-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.528561 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nh2b\" (UniqueName: \"kubernetes.io/projected/4e4c3f33-0d15-4434-9940-21a310e1e272-kube-api-access-7nh2b\") pod \"memcached-0\" (UID: \"4e4c3f33-0d15-4434-9940-21a310e1e272\") " pod="nova-kuttl-default/memcached-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.528617 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e4c3f33-0d15-4434-9940-21a310e1e272-config-data\") pod \"memcached-0\" (UID: \"4e4c3f33-0d15-4434-9940-21a310e1e272\") " pod="nova-kuttl-default/memcached-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.544452 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstack-galera-0"] Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.546497 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.549039 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-scripts" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.551604 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"galera-openstack-dockercfg-gzb4v" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.551692 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-config-data" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.552497 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-galera-0"] Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.557954 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"cert-galera-openstack-svc" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.563488 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"combined-ca-bundle" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.629559 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6cf03c71-da90-490d-8f3c-f5646a45b9d6-config-data-default\") pod \"openstack-galera-0\" (UID: \"6cf03c71-da90-490d-8f3c-f5646a45b9d6\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.629628 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cf03c71-da90-490d-8f3c-f5646a45b9d6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6cf03c71-da90-490d-8f3c-f5646a45b9d6\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.629690 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4e4c3f33-0d15-4434-9940-21a310e1e272-kolla-config\") pod \"memcached-0\" (UID: \"4e4c3f33-0d15-4434-9940-21a310e1e272\") " pod="nova-kuttl-default/memcached-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.629719 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nh2b\" (UniqueName: \"kubernetes.io/projected/4e4c3f33-0d15-4434-9940-21a310e1e272-kube-api-access-7nh2b\") pod \"memcached-0\" (UID: \"4e4c3f33-0d15-4434-9940-21a310e1e272\") " pod="nova-kuttl-default/memcached-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.629752 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6cf03c71-da90-490d-8f3c-f5646a45b9d6-kolla-config\") pod \"openstack-galera-0\" (UID: \"6cf03c71-da90-490d-8f3c-f5646a45b9d6\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.629787 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7mj9\" (UniqueName: \"kubernetes.io/projected/6cf03c71-da90-490d-8f3c-f5646a45b9d6-kube-api-access-f7mj9\") pod \"openstack-galera-0\" (UID: \"6cf03c71-da90-490d-8f3c-f5646a45b9d6\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.629814 4893 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e4c3f33-0d15-4434-9940-21a310e1e272-config-data\") pod \"memcached-0\" (UID: \"4e4c3f33-0d15-4434-9940-21a310e1e272\") " pod="nova-kuttl-default/memcached-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.629842 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cf03c71-da90-490d-8f3c-f5646a45b9d6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6cf03c71-da90-490d-8f3c-f5646a45b9d6\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.629872 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cf03c71-da90-490d-8f3c-f5646a45b9d6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6cf03c71-da90-490d-8f3c-f5646a45b9d6\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.629897 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6cf03c71-da90-490d-8f3c-f5646a45b9d6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6cf03c71-da90-490d-8f3c-f5646a45b9d6\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.629927 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e90799aa-7492-4d01-9d67-a1995850b075\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e90799aa-7492-4d01-9d67-a1995850b075\") pod \"openstack-galera-0\" (UID: \"6cf03c71-da90-490d-8f3c-f5646a45b9d6\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.630940 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4e4c3f33-0d15-4434-9940-21a310e1e272-kolla-config\") pod \"memcached-0\" (UID: \"4e4c3f33-0d15-4434-9940-21a310e1e272\") " pod="nova-kuttl-default/memcached-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.631178 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e4c3f33-0d15-4434-9940-21a310e1e272-config-data\") pod \"memcached-0\" (UID: \"4e4c3f33-0d15-4434-9940-21a310e1e272\") " pod="nova-kuttl-default/memcached-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.658705 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nh2b\" (UniqueName: \"kubernetes.io/projected/4e4c3f33-0d15-4434-9940-21a310e1e272-kube-api-access-7nh2b\") pod \"memcached-0\" (UID: \"4e4c3f33-0d15-4434-9940-21a310e1e272\") " pod="nova-kuttl-default/memcached-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.731375 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7mj9\" (UniqueName: \"kubernetes.io/projected/6cf03c71-da90-490d-8f3c-f5646a45b9d6-kube-api-access-f7mj9\") pod \"openstack-galera-0\" (UID: \"6cf03c71-da90-490d-8f3c-f5646a45b9d6\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.731767 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cf03c71-da90-490d-8f3c-f5646a45b9d6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6cf03c71-da90-490d-8f3c-f5646a45b9d6\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.731843 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6cf03c71-da90-490d-8f3c-f5646a45b9d6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6cf03c71-da90-490d-8f3c-f5646a45b9d6\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.732574 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6cf03c71-da90-490d-8f3c-f5646a45b9d6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6cf03c71-da90-490d-8f3c-f5646a45b9d6\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.732693 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e90799aa-7492-4d01-9d67-a1995850b075\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e90799aa-7492-4d01-9d67-a1995850b075\") pod \"openstack-galera-0\" (UID: \"6cf03c71-da90-490d-8f3c-f5646a45b9d6\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.732757 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6cf03c71-da90-490d-8f3c-f5646a45b9d6-config-data-default\") pod \"openstack-galera-0\" (UID: \"6cf03c71-da90-490d-8f3c-f5646a45b9d6\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.732857 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cf03c71-da90-490d-8f3c-f5646a45b9d6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6cf03c71-da90-490d-8f3c-f5646a45b9d6\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.732996 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6cf03c71-da90-490d-8f3c-f5646a45b9d6-kolla-config\") pod \"openstack-galera-0\" (UID: \"6cf03c71-da90-490d-8f3c-f5646a45b9d6\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.733746 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6cf03c71-da90-490d-8f3c-f5646a45b9d6-kolla-config\") pod \"openstack-galera-0\" (UID: \"6cf03c71-da90-490d-8f3c-f5646a45b9d6\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.734022 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6cf03c71-da90-490d-8f3c-f5646a45b9d6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6cf03c71-da90-490d-8f3c-f5646a45b9d6\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.734120 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/6cf03c71-da90-490d-8f3c-f5646a45b9d6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6cf03c71-da90-490d-8f3c-f5646a45b9d6\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.734918 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6cf03c71-da90-490d-8f3c-f5646a45b9d6-config-data-default\") pod \"openstack-galera-0\" (UID: \"6cf03c71-da90-490d-8f3c-f5646a45b9d6\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.741692 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cf03c71-da90-490d-8f3c-f5646a45b9d6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6cf03c71-da90-490d-8f3c-f5646a45b9d6\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.743675 4893 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.743714 4893 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e90799aa-7492-4d01-9d67-a1995850b075\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e90799aa-7492-4d01-9d67-a1995850b075\") pod \"openstack-galera-0\" (UID: \"6cf03c71-da90-490d-8f3c-f5646a45b9d6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d046e7316b9d190b4848e6f5162703be26d0c3ac2412d157c13edfb65dcccdec/globalmount\"" pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.744993 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6cf03c71-da90-490d-8f3c-f5646a45b9d6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6cf03c71-da90-490d-8f3c-f5646a45b9d6\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.755596 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7mj9\" (UniqueName: \"kubernetes.io/projected/6cf03c71-da90-490d-8f3c-f5646a45b9d6-kube-api-access-f7mj9\") pod \"openstack-galera-0\" (UID: \"6cf03c71-da90-490d-8f3c-f5646a45b9d6\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.757890 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/memcached-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.790959 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e90799aa-7492-4d01-9d67-a1995850b075\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e90799aa-7492-4d01-9d67-a1995850b075\") pod \"openstack-galera-0\" (UID: \"6cf03c71-da90-490d-8f3c-f5646a45b9d6\") " pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:20:42 crc kubenswrapper[4893]: I0128 15:20:42.895669 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.256494 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/memcached-0"] Jan 28 15:20:43 crc kubenswrapper[4893]: W0128 15:20:43.267003 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e4c3f33_0d15_4434_9940_21a310e1e272.slice/crio-df211e774e00fa3192ed4f4c5c2efc2bf6054b0c4791d1666b469cc59bd0196e WatchSource:0}: Error finding container df211e774e00fa3192ed4f4c5c2efc2bf6054b0c4791d1666b469cc59bd0196e: Status 404 returned error can't find the container with id df211e774e00fa3192ed4f4c5c2efc2bf6054b0c4791d1666b469cc59bd0196e Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.387398 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/memcached-0" event={"ID":"4e4c3f33-0d15-4434-9940-21a310e1e272","Type":"ContainerStarted","Data":"df211e774e00fa3192ed4f4c5c2efc2bf6054b0c4791d1666b469cc59bd0196e"} Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.391029 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"67b2b466-ebc4-41d8-8b96-a285eb0609f5","Type":"ContainerStarted","Data":"41a560a828ca05607b86f2ebffdaafd985cb3eebe909b3d592e113a9ca6b62aa"} Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.584981 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-galera-0"] Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.689616 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"] Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.690901 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.693387 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-cell1-config-data" Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.693907 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"cert-galera-openstack-cell1-svc" Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.696210 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"galera-openstack-cell1-dockercfg-hgchz" Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.696632 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-cell1-scripts" Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.708983 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"] Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.757371 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1418afdb-10ec-4cb7-853d-d0f755621625-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1418afdb-10ec-4cb7-853d-d0f755621625\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.757421 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94s8w\" (UniqueName: \"kubernetes.io/projected/1418afdb-10ec-4cb7-853d-d0f755621625-kube-api-access-94s8w\") pod \"openstack-cell1-galera-0\" (UID: \"1418afdb-10ec-4cb7-853d-d0f755621625\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.757506 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-014f15ca-f796-44a5-8054-ebdb0dddc8b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-014f15ca-f796-44a5-8054-ebdb0dddc8b9\") pod \"openstack-cell1-galera-0\" (UID: \"1418afdb-10ec-4cb7-853d-d0f755621625\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.757548 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1418afdb-10ec-4cb7-853d-d0f755621625-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1418afdb-10ec-4cb7-853d-d0f755621625\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.757576 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1418afdb-10ec-4cb7-853d-d0f755621625-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1418afdb-10ec-4cb7-853d-d0f755621625\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.757591 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1418afdb-10ec-4cb7-853d-d0f755621625-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1418afdb-10ec-4cb7-853d-d0f755621625\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:20:43 crc 
kubenswrapper[4893]: I0128 15:20:43.757614 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1418afdb-10ec-4cb7-853d-d0f755621625-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1418afdb-10ec-4cb7-853d-d0f755621625\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.757638 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1418afdb-10ec-4cb7-853d-d0f755621625-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1418afdb-10ec-4cb7-853d-d0f755621625\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.863096 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1418afdb-10ec-4cb7-853d-d0f755621625-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1418afdb-10ec-4cb7-853d-d0f755621625\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.863174 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1418afdb-10ec-4cb7-853d-d0f755621625-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1418afdb-10ec-4cb7-853d-d0f755621625\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.863199 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1418afdb-10ec-4cb7-853d-d0f755621625-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1418afdb-10ec-4cb7-853d-d0f755621625\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.863271 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1418afdb-10ec-4cb7-853d-d0f755621625-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1418afdb-10ec-4cb7-853d-d0f755621625\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.863629 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1418afdb-10ec-4cb7-853d-d0f755621625-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1418afdb-10ec-4cb7-853d-d0f755621625\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.863845 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1418afdb-10ec-4cb7-853d-d0f755621625-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1418afdb-10ec-4cb7-853d-d0f755621625\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.863885 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94s8w\" (UniqueName: \"kubernetes.io/projected/1418afdb-10ec-4cb7-853d-d0f755621625-kube-api-access-94s8w\") pod \"openstack-cell1-galera-0\" (UID: \"1418afdb-10ec-4cb7-853d-d0f755621625\") " 
pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.863960 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-014f15ca-f796-44a5-8054-ebdb0dddc8b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-014f15ca-f796-44a5-8054-ebdb0dddc8b9\") pod \"openstack-cell1-galera-0\" (UID: \"1418afdb-10ec-4cb7-853d-d0f755621625\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.864199 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1418afdb-10ec-4cb7-853d-d0f755621625-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1418afdb-10ec-4cb7-853d-d0f755621625\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.864272 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1418afdb-10ec-4cb7-853d-d0f755621625-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1418afdb-10ec-4cb7-853d-d0f755621625\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.864903 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1418afdb-10ec-4cb7-853d-d0f755621625-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1418afdb-10ec-4cb7-853d-d0f755621625\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.866790 4893 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
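
By this point the same mount sequence has run for five pods (rabbitmq-server-0, rabbitmq-broadcaster-server-0, rabbitmq-cell1-server-0, memcached-0, openstack-galera-0) and is finishing for openstack-cell1-galera-0. To spot-check a journal like this without reading it entry by entry, a throwaway filter can reduce it to one summary line per pod; the regexp is keyed to the exact escaped-quote klog format of the "MountVolume.SetUp succeeded" entries in this log:

    // Throwaway filter: condense the mount chatter above to one line per pod.
    // Reads journal text on stdin, e.g. piped from `journalctl -u kubelet`.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Matches: MountVolume.SetUp succeeded for volume \"NAME\" ... pod="NS/POD"
    var setupOK = regexp.MustCompile(
        `MountVolume\.SetUp succeeded for volume \\"([^"\\]+)\\".*pod="([^"]+)"`)

    func main() {
        mounted := map[string][]string{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            if m := setupOK.FindStringSubmatch(sc.Text()); m != nil {
                mounted[m[2]] = append(mounted[m[2]], m[1])
            }
        }
        for pod, vols := range mounted {
            fmt.Printf("%s: %d volumes %v\n", pod, len(vols), vols)
        }
    }

Run against this excerpt it would report, for instance, nine mounted volumes for nova-kuttl-default/rabbitmq-cell1-server-0 (the seven config/secret/projected/downward-API mounts, the two empty-dirs, and the PVC collapse into one line).
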
Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.866821 4893 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-014f15ca-f796-44a5-8054-ebdb0dddc8b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-014f15ca-f796-44a5-8054-ebdb0dddc8b9\") pod \"openstack-cell1-galera-0\" (UID: \"1418afdb-10ec-4cb7-853d-d0f755621625\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/540702ad5803d0026e0b53989df399e9f84daa4277c08ef2c52c2e55ff7911fb/globalmount\"" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.868567 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1418afdb-10ec-4cb7-853d-d0f755621625-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1418afdb-10ec-4cb7-853d-d0f755621625\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.874615 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1418afdb-10ec-4cb7-853d-d0f755621625-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1418afdb-10ec-4cb7-853d-d0f755621625\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.891330 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1418afdb-10ec-4cb7-853d-d0f755621625-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1418afdb-10ec-4cb7-853d-d0f755621625\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.895038 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94s8w\" (UniqueName: \"kubernetes.io/projected/1418afdb-10ec-4cb7-853d-d0f755621625-kube-api-access-94s8w\") pod \"openstack-cell1-galera-0\" (UID: \"1418afdb-10ec-4cb7-853d-d0f755621625\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:20:43 crc kubenswrapper[4893]: I0128 15:20:43.908763 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-014f15ca-f796-44a5-8054-ebdb0dddc8b9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-014f15ca-f796-44a5-8054-ebdb0dddc8b9\") pod \"openstack-cell1-galera-0\" (UID: \"1418afdb-10ec-4cb7-853d-d0f755621625\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:20:44 crc kubenswrapper[4893]: I0128 15:20:44.022222 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:20:44 crc kubenswrapper[4893]: I0128 15:20:44.410164 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"6cf03c71-da90-490d-8f3c-f5646a45b9d6","Type":"ContainerStarted","Data":"6c4afbb58640559b1a32edf526d1c5a34ae6767a119e9e895068f34c9c46e110"} Jan 28 15:20:44 crc kubenswrapper[4893]: I0128 15:20:44.477309 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"] Jan 28 15:20:44 crc kubenswrapper[4893]: W0128 15:20:44.487722 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1418afdb_10ec_4cb7_853d_d0f755621625.slice/crio-052175cd0986b956dfb2778421ba6a4eca54f90e7f7eccaa2aa0b4081f25f59d WatchSource:0}: Error finding container 052175cd0986b956dfb2778421ba6a4eca54f90e7f7eccaa2aa0b4081f25f59d: Status 404 returned error can't find the container with id 052175cd0986b956dfb2778421ba6a4eca54f90e7f7eccaa2aa0b4081f25f59d Jan 28 15:20:45 crc kubenswrapper[4893]: I0128 15:20:45.424178 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"1418afdb-10ec-4cb7-853d-d0f755621625","Type":"ContainerStarted","Data":"052175cd0986b956dfb2778421ba6a4eca54f90e7f7eccaa2aa0b4081f25f59d"} Jan 28 15:20:57 crc kubenswrapper[4893]: E0128 15:20:57.228126 4893 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 28 15:20:57 crc kubenswrapper[4893]: E0128 15:20:57.229080 4893 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n598h55h7h5c5hf7h699h5fhf5h5b8h568hcdh58dh566hf5h5fdh578h675h67dh58fh6h5cch54ch558h68fh5d4hb8h645h5b5h5bfh687hf6hfq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7nh2b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_nova-kuttl-default(4e4c3f33-0d15-4434-9940-21a310e1e272): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:20:57 crc kubenswrapper[4893]: E0128 15:20:57.230307 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="nova-kuttl-default/memcached-0" podUID="4e4c3f33-0d15-4434-9940-21a310e1e272" Jan 28 15:20:57 crc kubenswrapper[4893]: I0128 15:20:57.517606 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"6cf03c71-da90-490d-8f3c-f5646a45b9d6","Type":"ContainerStarted","Data":"5fa0a5eb24af7610cd3fc7ea37ec428e25606f7ee4639683c33edcf03026a93d"} Jan 28 15:20:57 crc kubenswrapper[4893]: I0128 15:20:57.521099 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"1418afdb-10ec-4cb7-853d-d0f755621625","Type":"ContainerStarted","Data":"39582c9b095187da13e7e85571269122b94891c59d72bf095863e407b7271cfa"} Jan 28 15:20:57 crc kubenswrapper[4893]: E0128 15:20:57.523086 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="nova-kuttl-default/memcached-0" podUID="4e4c3f33-0d15-4434-9940-21a310e1e272" Jan 28 15:20:58 crc kubenswrapper[4893]: I0128 15:20:58.531413 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"dcd1c126-70b7-46e1-8226-bc7dc353ecdb","Type":"ContainerStarted","Data":"ee4f2aaec90a2a7044570546afb4c51006663e7cd5a35772f2f2951f41b45455"} Jan 28 15:20:58 crc kubenswrapper[4893]: I0128 15:20:58.535119 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"67b2b466-ebc4-41d8-8b96-a285eb0609f5","Type":"ContainerStarted","Data":"9e462ce55f992a4384897da48dc3170a1e646efc266afb19bf972aeb6bf0e6a3"} Jan 28 15:20:59 crc kubenswrapper[4893]: I0128 15:20:59.543828 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"01a81616-675d-43ec-acb2-7a4541b96771","Type":"ContainerStarted","Data":"904d17f4f3ea3ab55ddc8e3b2bb0d894ddd320674fb25d3d6bf38d29f17cca6c"} Jan 28 15:21:01 crc kubenswrapper[4893]: 
I0128 15:21:01.560243 4893 generic.go:334] "Generic (PLEG): container finished" podID="6cf03c71-da90-490d-8f3c-f5646a45b9d6" containerID="5fa0a5eb24af7610cd3fc7ea37ec428e25606f7ee4639683c33edcf03026a93d" exitCode=0 Jan 28 15:21:01 crc kubenswrapper[4893]: I0128 15:21:01.560334 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"6cf03c71-da90-490d-8f3c-f5646a45b9d6","Type":"ContainerDied","Data":"5fa0a5eb24af7610cd3fc7ea37ec428e25606f7ee4639683c33edcf03026a93d"} Jan 28 15:21:01 crc kubenswrapper[4893]: I0128 15:21:01.565807 4893 generic.go:334] "Generic (PLEG): container finished" podID="1418afdb-10ec-4cb7-853d-d0f755621625" containerID="39582c9b095187da13e7e85571269122b94891c59d72bf095863e407b7271cfa" exitCode=0 Jan 28 15:21:01 crc kubenswrapper[4893]: I0128 15:21:01.565862 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"1418afdb-10ec-4cb7-853d-d0f755621625","Type":"ContainerDied","Data":"39582c9b095187da13e7e85571269122b94891c59d72bf095863e407b7271cfa"} Jan 28 15:21:02 crc kubenswrapper[4893]: I0128 15:21:02.573969 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"6cf03c71-da90-490d-8f3c-f5646a45b9d6","Type":"ContainerStarted","Data":"d0db4a300cde6a563c03107c193edadfe655ee943371cd3075e75aeab77a68ea"} Jan 28 15:21:02 crc kubenswrapper[4893]: I0128 15:21:02.575604 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"1418afdb-10ec-4cb7-853d-d0f755621625","Type":"ContainerStarted","Data":"ff1d39b1d8f128569d7e198be1c06f2df9e976f0525b8011e53c56284da64fd3"} Jan 28 15:21:02 crc kubenswrapper[4893]: I0128 15:21:02.598985 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/openstack-galera-0" podStartSLOduration=7.935028003 podStartE2EDuration="21.598967757s" podCreationTimestamp="2026-01-28 15:20:41 +0000 UTC" firstStartedPulling="2026-01-28 15:20:43.61607926 +0000 UTC m=+1161.389694288" lastFinishedPulling="2026-01-28 15:20:57.280019014 +0000 UTC m=+1175.053634042" observedRunningTime="2026-01-28 15:21:02.593629581 +0000 UTC m=+1180.367244619" watchObservedRunningTime="2026-01-28 15:21:02.598967757 +0000 UTC m=+1180.372582775" Jan 28 15:21:02 crc kubenswrapper[4893]: I0128 15:21:02.624418 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/openstack-cell1-galera-0" podStartSLOduration=7.781526492 podStartE2EDuration="20.62439939s" podCreationTimestamp="2026-01-28 15:20:42 +0000 UTC" firstStartedPulling="2026-01-28 15:20:44.489872944 +0000 UTC m=+1162.263487972" lastFinishedPulling="2026-01-28 15:20:57.332745842 +0000 UTC m=+1175.106360870" observedRunningTime="2026-01-28 15:21:02.61706816 +0000 UTC m=+1180.390683178" watchObservedRunningTime="2026-01-28 15:21:02.62439939 +0000 UTC m=+1180.398014418" Jan 28 15:21:02 crc kubenswrapper[4893]: I0128 15:21:02.900933 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:21:02 crc kubenswrapper[4893]: I0128 15:21:02.900982 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:21:04 crc kubenswrapper[4893]: I0128 15:21:04.023847 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:21:04 crc 
kubenswrapper[4893]: I0128 15:21:04.024165 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:21:06 crc kubenswrapper[4893]: I0128 15:21:06.986654 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:21:07 crc kubenswrapper[4893]: I0128 15:21:07.075182 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/openstack-galera-0" Jan 28 15:21:08 crc kubenswrapper[4893]: I0128 15:21:08.121658 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:21:08 crc kubenswrapper[4893]: I0128 15:21:08.210955 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 28 15:21:11 crc kubenswrapper[4893]: I0128 15:21:11.632964 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/root-account-create-update-92st2"] Jan 28 15:21:11 crc kubenswrapper[4893]: I0128 15:21:11.634736 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-92st2" Jan 28 15:21:11 crc kubenswrapper[4893]: I0128 15:21:11.638314 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-mariadb-root-db-secret" Jan 28 15:21:11 crc kubenswrapper[4893]: I0128 15:21:11.645638 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-92st2"] Jan 28 15:21:11 crc kubenswrapper[4893]: I0128 15:21:11.695595 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72ed9ea9-0765-446e-898c-36cddb63c725-operator-scripts\") pod \"root-account-create-update-92st2\" (UID: \"72ed9ea9-0765-446e-898c-36cddb63c725\") " pod="nova-kuttl-default/root-account-create-update-92st2" Jan 28 15:21:11 crc kubenswrapper[4893]: I0128 15:21:11.695672 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k442r\" (UniqueName: \"kubernetes.io/projected/72ed9ea9-0765-446e-898c-36cddb63c725-kube-api-access-k442r\") pod \"root-account-create-update-92st2\" (UID: \"72ed9ea9-0765-446e-898c-36cddb63c725\") " pod="nova-kuttl-default/root-account-create-update-92st2" Jan 28 15:21:11 crc kubenswrapper[4893]: I0128 15:21:11.796892 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72ed9ea9-0765-446e-898c-36cddb63c725-operator-scripts\") pod \"root-account-create-update-92st2\" (UID: \"72ed9ea9-0765-446e-898c-36cddb63c725\") " pod="nova-kuttl-default/root-account-create-update-92st2" Jan 28 15:21:11 crc kubenswrapper[4893]: I0128 15:21:11.796976 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k442r\" (UniqueName: \"kubernetes.io/projected/72ed9ea9-0765-446e-898c-36cddb63c725-kube-api-access-k442r\") pod \"root-account-create-update-92st2\" (UID: \"72ed9ea9-0765-446e-898c-36cddb63c725\") " pod="nova-kuttl-default/root-account-create-update-92st2" Jan 28 15:21:11 crc kubenswrapper[4893]: I0128 15:21:11.798274 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/72ed9ea9-0765-446e-898c-36cddb63c725-operator-scripts\") pod \"root-account-create-update-92st2\" (UID: \"72ed9ea9-0765-446e-898c-36cddb63c725\") " pod="nova-kuttl-default/root-account-create-update-92st2" Jan 28 15:21:11 crc kubenswrapper[4893]: I0128 15:21:11.816691 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k442r\" (UniqueName: \"kubernetes.io/projected/72ed9ea9-0765-446e-898c-36cddb63c725-kube-api-access-k442r\") pod \"root-account-create-update-92st2\" (UID: \"72ed9ea9-0765-446e-898c-36cddb63c725\") " pod="nova-kuttl-default/root-account-create-update-92st2" Jan 28 15:21:11 crc kubenswrapper[4893]: I0128 15:21:11.951889 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-92st2" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.266500 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-92st2"] Jan 28 15:21:12 crc kubenswrapper[4893]: W0128 15:21:12.276428 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72ed9ea9_0765_446e_898c_36cddb63c725.slice/crio-812e4e3c316f69672dad47256e2b9218cf6f60895770387a425d1286e30a190f WatchSource:0}: Error finding container 812e4e3c316f69672dad47256e2b9218cf6f60895770387a425d1286e30a190f: Status 404 returned error can't find the container with id 812e4e3c316f69672dad47256e2b9218cf6f60895770387a425d1286e30a190f Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.326455 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-db-create-fj4kz"] Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.328201 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-fj4kz" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.334405 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-create-fj4kz"] Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.406659 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dnpj\" (UniqueName: \"kubernetes.io/projected/ca2bcba5-5853-4f12-8bde-522e186d1839-kube-api-access-4dnpj\") pod \"keystone-db-create-fj4kz\" (UID: \"ca2bcba5-5853-4f12-8bde-522e186d1839\") " pod="nova-kuttl-default/keystone-db-create-fj4kz" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.406779 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca2bcba5-5853-4f12-8bde-522e186d1839-operator-scripts\") pod \"keystone-db-create-fj4kz\" (UID: \"ca2bcba5-5853-4f12-8bde-522e186d1839\") " pod="nova-kuttl-default/keystone-db-create-fj4kz" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.423545 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-bf54-account-create-update-zxqbm"] Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.424774 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bf54-account-create-update-zxqbm" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.430076 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-db-secret" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.434306 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bf54-account-create-update-zxqbm"] Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.508227 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca2bcba5-5853-4f12-8bde-522e186d1839-operator-scripts\") pod \"keystone-db-create-fj4kz\" (UID: \"ca2bcba5-5853-4f12-8bde-522e186d1839\") " pod="nova-kuttl-default/keystone-db-create-fj4kz" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.508321 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4724e828-4305-4fdc-9bec-0af263e7eed9-operator-scripts\") pod \"keystone-bf54-account-create-update-zxqbm\" (UID: \"4724e828-4305-4fdc-9bec-0af263e7eed9\") " pod="nova-kuttl-default/keystone-bf54-account-create-update-zxqbm" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.508370 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dnpj\" (UniqueName: \"kubernetes.io/projected/ca2bcba5-5853-4f12-8bde-522e186d1839-kube-api-access-4dnpj\") pod \"keystone-db-create-fj4kz\" (UID: \"ca2bcba5-5853-4f12-8bde-522e186d1839\") " pod="nova-kuttl-default/keystone-db-create-fj4kz" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.508423 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nkjh\" (UniqueName: \"kubernetes.io/projected/4724e828-4305-4fdc-9bec-0af263e7eed9-kube-api-access-9nkjh\") pod \"keystone-bf54-account-create-update-zxqbm\" (UID: \"4724e828-4305-4fdc-9bec-0af263e7eed9\") " pod="nova-kuttl-default/keystone-bf54-account-create-update-zxqbm" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.509334 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca2bcba5-5853-4f12-8bde-522e186d1839-operator-scripts\") pod \"keystone-db-create-fj4kz\" (UID: \"ca2bcba5-5853-4f12-8bde-522e186d1839\") " pod="nova-kuttl-default/keystone-db-create-fj4kz" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.528338 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dnpj\" (UniqueName: \"kubernetes.io/projected/ca2bcba5-5853-4f12-8bde-522e186d1839-kube-api-access-4dnpj\") pod \"keystone-db-create-fj4kz\" (UID: \"ca2bcba5-5853-4f12-8bde-522e186d1839\") " pod="nova-kuttl-default/keystone-db-create-fj4kz" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.609462 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nkjh\" (UniqueName: \"kubernetes.io/projected/4724e828-4305-4fdc-9bec-0af263e7eed9-kube-api-access-9nkjh\") pod \"keystone-bf54-account-create-update-zxqbm\" (UID: \"4724e828-4305-4fdc-9bec-0af263e7eed9\") " pod="nova-kuttl-default/keystone-bf54-account-create-update-zxqbm" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.609632 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4724e828-4305-4fdc-9bec-0af263e7eed9-operator-scripts\") pod \"keystone-bf54-account-create-update-zxqbm\" (UID: \"4724e828-4305-4fdc-9bec-0af263e7eed9\") " pod="nova-kuttl-default/keystone-bf54-account-create-update-zxqbm" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.610363 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4724e828-4305-4fdc-9bec-0af263e7eed9-operator-scripts\") pod \"keystone-bf54-account-create-update-zxqbm\" (UID: \"4724e828-4305-4fdc-9bec-0af263e7eed9\") " pod="nova-kuttl-default/keystone-bf54-account-create-update-zxqbm" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.628115 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nkjh\" (UniqueName: \"kubernetes.io/projected/4724e828-4305-4fdc-9bec-0af263e7eed9-kube-api-access-9nkjh\") pod \"keystone-bf54-account-create-update-zxqbm\" (UID: \"4724e828-4305-4fdc-9bec-0af263e7eed9\") " pod="nova-kuttl-default/keystone-bf54-account-create-update-zxqbm" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.638884 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-92st2" event={"ID":"72ed9ea9-0765-446e-898c-36cddb63c725","Type":"ContainerStarted","Data":"d02d8e76e47cb85e8c3b2d52aa5014e5f34cf38741e43f49171677933749258e"} Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.638947 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-92st2" event={"ID":"72ed9ea9-0765-446e-898c-36cddb63c725","Type":"ContainerStarted","Data":"812e4e3c316f69672dad47256e2b9218cf6f60895770387a425d1286e30a190f"} Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.662456 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-db-create-kjd29"] Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.663645 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-kjd29" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.672327 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-db-create-fj4kz" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.678261 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/root-account-create-update-92st2" podStartSLOduration=1.678241957 podStartE2EDuration="1.678241957s" podCreationTimestamp="2026-01-28 15:21:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:12.657165982 +0000 UTC m=+1190.430781020" watchObservedRunningTime="2026-01-28 15:21:12.678241957 +0000 UTC m=+1190.451856985" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.687408 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-create-kjd29"] Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.711552 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bjgz\" (UniqueName: \"kubernetes.io/projected/491fbaf4-ae4b-42f4-a505-70d34407e7ef-kube-api-access-6bjgz\") pod \"placement-db-create-kjd29\" (UID: \"491fbaf4-ae4b-42f4-a505-70d34407e7ef\") " pod="nova-kuttl-default/placement-db-create-kjd29" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.711935 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/491fbaf4-ae4b-42f4-a505-70d34407e7ef-operator-scripts\") pod \"placement-db-create-kjd29\" (UID: \"491fbaf4-ae4b-42f4-a505-70d34407e7ef\") " pod="nova-kuttl-default/placement-db-create-kjd29" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.728567 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-fbc8-account-create-update-5vsn8"] Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.729867 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-fbc8-account-create-update-5vsn8" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.732966 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-db-secret" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.735871 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-fbc8-account-create-update-5vsn8"] Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.744439 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bf54-account-create-update-zxqbm" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.813746 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/491fbaf4-ae4b-42f4-a505-70d34407e7ef-operator-scripts\") pod \"placement-db-create-kjd29\" (UID: \"491fbaf4-ae4b-42f4-a505-70d34407e7ef\") " pod="nova-kuttl-default/placement-db-create-kjd29" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.813838 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt7jg\" (UniqueName: \"kubernetes.io/projected/663785fa-d819-4227-a09f-0a7d2b72e7fe-kube-api-access-wt7jg\") pod \"placement-fbc8-account-create-update-5vsn8\" (UID: \"663785fa-d819-4227-a09f-0a7d2b72e7fe\") " pod="nova-kuttl-default/placement-fbc8-account-create-update-5vsn8" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.813884 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/663785fa-d819-4227-a09f-0a7d2b72e7fe-operator-scripts\") pod \"placement-fbc8-account-create-update-5vsn8\" (UID: \"663785fa-d819-4227-a09f-0a7d2b72e7fe\") " pod="nova-kuttl-default/placement-fbc8-account-create-update-5vsn8" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.813933 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bjgz\" (UniqueName: \"kubernetes.io/projected/491fbaf4-ae4b-42f4-a505-70d34407e7ef-kube-api-access-6bjgz\") pod \"placement-db-create-kjd29\" (UID: \"491fbaf4-ae4b-42f4-a505-70d34407e7ef\") " pod="nova-kuttl-default/placement-db-create-kjd29" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.815055 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/491fbaf4-ae4b-42f4-a505-70d34407e7ef-operator-scripts\") pod \"placement-db-create-kjd29\" (UID: \"491fbaf4-ae4b-42f4-a505-70d34407e7ef\") " pod="nova-kuttl-default/placement-db-create-kjd29" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.855463 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bjgz\" (UniqueName: \"kubernetes.io/projected/491fbaf4-ae4b-42f4-a505-70d34407e7ef-kube-api-access-6bjgz\") pod \"placement-db-create-kjd29\" (UID: \"491fbaf4-ae4b-42f4-a505-70d34407e7ef\") " pod="nova-kuttl-default/placement-db-create-kjd29" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.915756 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt7jg\" (UniqueName: \"kubernetes.io/projected/663785fa-d819-4227-a09f-0a7d2b72e7fe-kube-api-access-wt7jg\") pod \"placement-fbc8-account-create-update-5vsn8\" (UID: \"663785fa-d819-4227-a09f-0a7d2b72e7fe\") " pod="nova-kuttl-default/placement-fbc8-account-create-update-5vsn8" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.916082 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/663785fa-d819-4227-a09f-0a7d2b72e7fe-operator-scripts\") pod \"placement-fbc8-account-create-update-5vsn8\" (UID: \"663785fa-d819-4227-a09f-0a7d2b72e7fe\") " pod="nova-kuttl-default/placement-fbc8-account-create-update-5vsn8" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.918618 4893 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/663785fa-d819-4227-a09f-0a7d2b72e7fe-operator-scripts\") pod \"placement-fbc8-account-create-update-5vsn8\" (UID: \"663785fa-d819-4227-a09f-0a7d2b72e7fe\") " pod="nova-kuttl-default/placement-fbc8-account-create-update-5vsn8" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.950245 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt7jg\" (UniqueName: \"kubernetes.io/projected/663785fa-d819-4227-a09f-0a7d2b72e7fe-kube-api-access-wt7jg\") pod \"placement-fbc8-account-create-update-5vsn8\" (UID: \"663785fa-d819-4227-a09f-0a7d2b72e7fe\") " pod="nova-kuttl-default/placement-fbc8-account-create-update-5vsn8" Jan 28 15:21:12 crc kubenswrapper[4893]: I0128 15:21:12.982224 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-kjd29" Jan 28 15:21:13 crc kubenswrapper[4893]: I0128 15:21:13.046451 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-fbc8-account-create-update-5vsn8" Jan 28 15:21:13 crc kubenswrapper[4893]: I0128 15:21:13.187660 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-create-fj4kz"] Jan 28 15:21:13 crc kubenswrapper[4893]: W0128 15:21:13.204666 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca2bcba5_5853_4f12_8bde_522e186d1839.slice/crio-7c4821b81034d753ef3622f4d64865d5eff7dbd7d31694e90b67f299b16cb4bf WatchSource:0}: Error finding container 7c4821b81034d753ef3622f4d64865d5eff7dbd7d31694e90b67f299b16cb4bf: Status 404 returned error can't find the container with id 7c4821b81034d753ef3622f4d64865d5eff7dbd7d31694e90b67f299b16cb4bf Jan 28 15:21:13 crc kubenswrapper[4893]: I0128 15:21:13.265355 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bf54-account-create-update-zxqbm"] Jan 28 15:21:13 crc kubenswrapper[4893]: W0128 15:21:13.265830 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4724e828_4305_4fdc_9bec_0af263e7eed9.slice/crio-eb13acb286cfae79cf30b4b32785aa67ae813addb6bc100e3c6418a0f1abf4b4 WatchSource:0}: Error finding container eb13acb286cfae79cf30b4b32785aa67ae813addb6bc100e3c6418a0f1abf4b4: Status 404 returned error can't find the container with id eb13acb286cfae79cf30b4b32785aa67ae813addb6bc100e3c6418a0f1abf4b4 Jan 28 15:21:13 crc kubenswrapper[4893]: I0128 15:21:13.414279 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-create-kjd29"] Jan 28 15:21:13 crc kubenswrapper[4893]: I0128 15:21:13.494945 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-fbc8-account-create-update-5vsn8"] Jan 28 15:21:13 crc kubenswrapper[4893]: I0128 15:21:13.647247 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bf54-account-create-update-zxqbm" event={"ID":"4724e828-4305-4fdc-9bec-0af263e7eed9","Type":"ContainerStarted","Data":"102f86e791e595c41921982d56f27af9e40d62121b68af2de2f1b5ced550ef25"} Jan 28 15:21:13 crc kubenswrapper[4893]: I0128 15:21:13.647301 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bf54-account-create-update-zxqbm" 
event={"ID":"4724e828-4305-4fdc-9bec-0af263e7eed9","Type":"ContainerStarted","Data":"eb13acb286cfae79cf30b4b32785aa67ae813addb6bc100e3c6418a0f1abf4b4"} Jan 28 15:21:13 crc kubenswrapper[4893]: I0128 15:21:13.649917 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-fj4kz" event={"ID":"ca2bcba5-5853-4f12-8bde-522e186d1839","Type":"ContainerStarted","Data":"ca718ddd9e8cb38fb4d1ad7c50570e51c05f3d9f7d4934b889b7ff676520fb34"} Jan 28 15:21:13 crc kubenswrapper[4893]: I0128 15:21:13.649972 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-fj4kz" event={"ID":"ca2bcba5-5853-4f12-8bde-522e186d1839","Type":"ContainerStarted","Data":"7c4821b81034d753ef3622f4d64865d5eff7dbd7d31694e90b67f299b16cb4bf"} Jan 28 15:21:13 crc kubenswrapper[4893]: I0128 15:21:13.652012 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/memcached-0" event={"ID":"4e4c3f33-0d15-4434-9940-21a310e1e272","Type":"ContainerStarted","Data":"1c63503688a915fa5951a7c114dc0b8575b4c00b48c1875fb55b3cb0b7839d26"} Jan 28 15:21:13 crc kubenswrapper[4893]: I0128 15:21:13.652221 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/memcached-0" Jan 28 15:21:13 crc kubenswrapper[4893]: I0128 15:21:13.653587 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-fbc8-account-create-update-5vsn8" event={"ID":"663785fa-d819-4227-a09f-0a7d2b72e7fe","Type":"ContainerStarted","Data":"2e5a86aea1d478fc50596210b63ecfa9b4f3ea61a714dafed280d35645141fc2"} Jan 28 15:21:13 crc kubenswrapper[4893]: I0128 15:21:13.653629 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-fbc8-account-create-update-5vsn8" event={"ID":"663785fa-d819-4227-a09f-0a7d2b72e7fe","Type":"ContainerStarted","Data":"7319ff430d200501a444ff55d94797b97694e3636687b1d35c6442d99fb4ab30"} Jan 28 15:21:13 crc kubenswrapper[4893]: I0128 15:21:13.655202 4893 generic.go:334] "Generic (PLEG): container finished" podID="72ed9ea9-0765-446e-898c-36cddb63c725" containerID="d02d8e76e47cb85e8c3b2d52aa5014e5f34cf38741e43f49171677933749258e" exitCode=0 Jan 28 15:21:13 crc kubenswrapper[4893]: I0128 15:21:13.655252 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-92st2" event={"ID":"72ed9ea9-0765-446e-898c-36cddb63c725","Type":"ContainerDied","Data":"d02d8e76e47cb85e8c3b2d52aa5014e5f34cf38741e43f49171677933749258e"} Jan 28 15:21:13 crc kubenswrapper[4893]: I0128 15:21:13.656991 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-kjd29" event={"ID":"491fbaf4-ae4b-42f4-a505-70d34407e7ef","Type":"ContainerStarted","Data":"825c2f1a27ab9e2362363d6606344f75e84fe2c64f7054a93d740a2bf72c9873"} Jan 28 15:21:13 crc kubenswrapper[4893]: I0128 15:21:13.657024 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-kjd29" event={"ID":"491fbaf4-ae4b-42f4-a505-70d34407e7ef","Type":"ContainerStarted","Data":"746055bea3df6370099ec859b72efd192f2eb31cc60991befbc160290af19f8c"} Jan 28 15:21:13 crc kubenswrapper[4893]: I0128 15:21:13.667804 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-bf54-account-create-update-zxqbm" podStartSLOduration=1.667789094 podStartE2EDuration="1.667789094s" podCreationTimestamp="2026-01-28 15:21:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:13.666525899 +0000 UTC m=+1191.440140927" watchObservedRunningTime="2026-01-28 15:21:13.667789094 +0000 UTC m=+1191.441404122" Jan 28 15:21:13 crc kubenswrapper[4893]: I0128 15:21:13.695016 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/placement-fbc8-account-create-update-5vsn8" podStartSLOduration=1.694991895 podStartE2EDuration="1.694991895s" podCreationTimestamp="2026-01-28 15:21:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:13.686848754 +0000 UTC m=+1191.460463782" watchObservedRunningTime="2026-01-28 15:21:13.694991895 +0000 UTC m=+1191.468606923" Jan 28 15:21:13 crc kubenswrapper[4893]: I0128 15:21:13.723431 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-db-create-fj4kz" podStartSLOduration=1.723411271 podStartE2EDuration="1.723411271s" podCreationTimestamp="2026-01-28 15:21:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:13.719218016 +0000 UTC m=+1191.492833044" watchObservedRunningTime="2026-01-28 15:21:13.723411271 +0000 UTC m=+1191.497026299" Jan 28 15:21:13 crc kubenswrapper[4893]: I0128 15:21:13.739126 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/placement-db-create-kjd29" podStartSLOduration=1.739106539 podStartE2EDuration="1.739106539s" podCreationTimestamp="2026-01-28 15:21:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:13.739035077 +0000 UTC m=+1191.512650105" watchObservedRunningTime="2026-01-28 15:21:13.739106539 +0000 UTC m=+1191.512721577" Jan 28 15:21:13 crc kubenswrapper[4893]: I0128 15:21:13.764116 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/memcached-0" podStartSLOduration=2.49953108 podStartE2EDuration="31.76409262s" podCreationTimestamp="2026-01-28 15:20:42 +0000 UTC" firstStartedPulling="2026-01-28 15:20:43.270664765 +0000 UTC m=+1161.044279793" lastFinishedPulling="2026-01-28 15:21:12.535226305 +0000 UTC m=+1190.308841333" observedRunningTime="2026-01-28 15:21:13.758360574 +0000 UTC m=+1191.531975622" watchObservedRunningTime="2026-01-28 15:21:13.76409262 +0000 UTC m=+1191.537707648" Jan 28 15:21:14 crc kubenswrapper[4893]: I0128 15:21:14.666665 4893 generic.go:334] "Generic (PLEG): container finished" podID="663785fa-d819-4227-a09f-0a7d2b72e7fe" containerID="2e5a86aea1d478fc50596210b63ecfa9b4f3ea61a714dafed280d35645141fc2" exitCode=0 Jan 28 15:21:14 crc kubenswrapper[4893]: I0128 15:21:14.666785 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-fbc8-account-create-update-5vsn8" event={"ID":"663785fa-d819-4227-a09f-0a7d2b72e7fe","Type":"ContainerDied","Data":"2e5a86aea1d478fc50596210b63ecfa9b4f3ea61a714dafed280d35645141fc2"} Jan 28 15:21:14 crc kubenswrapper[4893]: I0128 15:21:14.669118 4893 generic.go:334] "Generic (PLEG): container finished" podID="491fbaf4-ae4b-42f4-a505-70d34407e7ef" containerID="825c2f1a27ab9e2362363d6606344f75e84fe2c64f7054a93d740a2bf72c9873" exitCode=0 Jan 28 15:21:14 crc kubenswrapper[4893]: I0128 15:21:14.669208 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="nova-kuttl-default/placement-db-create-kjd29" event={"ID":"491fbaf4-ae4b-42f4-a505-70d34407e7ef","Type":"ContainerDied","Data":"825c2f1a27ab9e2362363d6606344f75e84fe2c64f7054a93d740a2bf72c9873"} Jan 28 15:21:14 crc kubenswrapper[4893]: I0128 15:21:14.670886 4893 generic.go:334] "Generic (PLEG): container finished" podID="4724e828-4305-4fdc-9bec-0af263e7eed9" containerID="102f86e791e595c41921982d56f27af9e40d62121b68af2de2f1b5ced550ef25" exitCode=0 Jan 28 15:21:14 crc kubenswrapper[4893]: I0128 15:21:14.670948 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bf54-account-create-update-zxqbm" event={"ID":"4724e828-4305-4fdc-9bec-0af263e7eed9","Type":"ContainerDied","Data":"102f86e791e595c41921982d56f27af9e40d62121b68af2de2f1b5ced550ef25"} Jan 28 15:21:14 crc kubenswrapper[4893]: I0128 15:21:14.674119 4893 generic.go:334] "Generic (PLEG): container finished" podID="ca2bcba5-5853-4f12-8bde-522e186d1839" containerID="ca718ddd9e8cb38fb4d1ad7c50570e51c05f3d9f7d4934b889b7ff676520fb34" exitCode=0 Jan 28 15:21:14 crc kubenswrapper[4893]: I0128 15:21:14.674221 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-fj4kz" event={"ID":"ca2bcba5-5853-4f12-8bde-522e186d1839","Type":"ContainerDied","Data":"ca718ddd9e8cb38fb4d1ad7c50570e51c05f3d9f7d4934b889b7ff676520fb34"} Jan 28 15:21:14 crc kubenswrapper[4893]: I0128 15:21:14.980684 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-92st2" Jan 28 15:21:15 crc kubenswrapper[4893]: I0128 15:21:15.145732 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72ed9ea9-0765-446e-898c-36cddb63c725-operator-scripts\") pod \"72ed9ea9-0765-446e-898c-36cddb63c725\" (UID: \"72ed9ea9-0765-446e-898c-36cddb63c725\") " Jan 28 15:21:15 crc kubenswrapper[4893]: I0128 15:21:15.145856 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k442r\" (UniqueName: \"kubernetes.io/projected/72ed9ea9-0765-446e-898c-36cddb63c725-kube-api-access-k442r\") pod \"72ed9ea9-0765-446e-898c-36cddb63c725\" (UID: \"72ed9ea9-0765-446e-898c-36cddb63c725\") " Jan 28 15:21:15 crc kubenswrapper[4893]: I0128 15:21:15.146203 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72ed9ea9-0765-446e-898c-36cddb63c725-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "72ed9ea9-0765-446e-898c-36cddb63c725" (UID: "72ed9ea9-0765-446e-898c-36cddb63c725"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:15 crc kubenswrapper[4893]: I0128 15:21:15.146367 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72ed9ea9-0765-446e-898c-36cddb63c725-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:15 crc kubenswrapper[4893]: I0128 15:21:15.158961 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72ed9ea9-0765-446e-898c-36cddb63c725-kube-api-access-k442r" (OuterVolumeSpecName: "kube-api-access-k442r") pod "72ed9ea9-0765-446e-898c-36cddb63c725" (UID: "72ed9ea9-0765-446e-898c-36cddb63c725"). InnerVolumeSpecName "kube-api-access-k442r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:15 crc kubenswrapper[4893]: I0128 15:21:15.247380 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k442r\" (UniqueName: \"kubernetes.io/projected/72ed9ea9-0765-446e-898c-36cddb63c725-kube-api-access-k442r\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:15 crc kubenswrapper[4893]: I0128 15:21:15.683209 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-92st2" event={"ID":"72ed9ea9-0765-446e-898c-36cddb63c725","Type":"ContainerDied","Data":"812e4e3c316f69672dad47256e2b9218cf6f60895770387a425d1286e30a190f"} Jan 28 15:21:15 crc kubenswrapper[4893]: I0128 15:21:15.683259 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="812e4e3c316f69672dad47256e2b9218cf6f60895770387a425d1286e30a190f" Jan 28 15:21:15 crc kubenswrapper[4893]: I0128 15:21:15.683407 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-92st2" Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.085012 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-fbc8-account-create-update-5vsn8" Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.168619 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-fj4kz" Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.176866 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-kjd29" Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.187171 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bf54-account-create-update-zxqbm" Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.263302 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dnpj\" (UniqueName: \"kubernetes.io/projected/ca2bcba5-5853-4f12-8bde-522e186d1839-kube-api-access-4dnpj\") pod \"ca2bcba5-5853-4f12-8bde-522e186d1839\" (UID: \"ca2bcba5-5853-4f12-8bde-522e186d1839\") " Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.263361 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt7jg\" (UniqueName: \"kubernetes.io/projected/663785fa-d819-4227-a09f-0a7d2b72e7fe-kube-api-access-wt7jg\") pod \"663785fa-d819-4227-a09f-0a7d2b72e7fe\" (UID: \"663785fa-d819-4227-a09f-0a7d2b72e7fe\") " Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.263427 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca2bcba5-5853-4f12-8bde-522e186d1839-operator-scripts\") pod \"ca2bcba5-5853-4f12-8bde-522e186d1839\" (UID: \"ca2bcba5-5853-4f12-8bde-522e186d1839\") " Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.263643 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/663785fa-d819-4227-a09f-0a7d2b72e7fe-operator-scripts\") pod \"663785fa-d819-4227-a09f-0a7d2b72e7fe\" (UID: \"663785fa-d819-4227-a09f-0a7d2b72e7fe\") " Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.264406 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/663785fa-d819-4227-a09f-0a7d2b72e7fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "663785fa-d819-4227-a09f-0a7d2b72e7fe" (UID: "663785fa-d819-4227-a09f-0a7d2b72e7fe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.265658 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca2bcba5-5853-4f12-8bde-522e186d1839-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ca2bcba5-5853-4f12-8bde-522e186d1839" (UID: "ca2bcba5-5853-4f12-8bde-522e186d1839"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.267791 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca2bcba5-5853-4f12-8bde-522e186d1839-kube-api-access-4dnpj" (OuterVolumeSpecName: "kube-api-access-4dnpj") pod "ca2bcba5-5853-4f12-8bde-522e186d1839" (UID: "ca2bcba5-5853-4f12-8bde-522e186d1839"). InnerVolumeSpecName "kube-api-access-4dnpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.268698 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/663785fa-d819-4227-a09f-0a7d2b72e7fe-kube-api-access-wt7jg" (OuterVolumeSpecName: "kube-api-access-wt7jg") pod "663785fa-d819-4227-a09f-0a7d2b72e7fe" (UID: "663785fa-d819-4227-a09f-0a7d2b72e7fe"). InnerVolumeSpecName "kube-api-access-wt7jg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.364995 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/491fbaf4-ae4b-42f4-a505-70d34407e7ef-operator-scripts\") pod \"491fbaf4-ae4b-42f4-a505-70d34407e7ef\" (UID: \"491fbaf4-ae4b-42f4-a505-70d34407e7ef\") " Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.365054 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4724e828-4305-4fdc-9bec-0af263e7eed9-operator-scripts\") pod \"4724e828-4305-4fdc-9bec-0af263e7eed9\" (UID: \"4724e828-4305-4fdc-9bec-0af263e7eed9\") " Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.365072 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bjgz\" (UniqueName: \"kubernetes.io/projected/491fbaf4-ae4b-42f4-a505-70d34407e7ef-kube-api-access-6bjgz\") pod \"491fbaf4-ae4b-42f4-a505-70d34407e7ef\" (UID: \"491fbaf4-ae4b-42f4-a505-70d34407e7ef\") " Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.365117 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nkjh\" (UniqueName: \"kubernetes.io/projected/4724e828-4305-4fdc-9bec-0af263e7eed9-kube-api-access-9nkjh\") pod \"4724e828-4305-4fdc-9bec-0af263e7eed9\" (UID: \"4724e828-4305-4fdc-9bec-0af263e7eed9\") " Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.365448 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dnpj\" (UniqueName: \"kubernetes.io/projected/ca2bcba5-5853-4f12-8bde-522e186d1839-kube-api-access-4dnpj\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.365456 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/491fbaf4-ae4b-42f4-a505-70d34407e7ef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "491fbaf4-ae4b-42f4-a505-70d34407e7ef" (UID: "491fbaf4-ae4b-42f4-a505-70d34407e7ef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.365464 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt7jg\" (UniqueName: \"kubernetes.io/projected/663785fa-d819-4227-a09f-0a7d2b72e7fe-kube-api-access-wt7jg\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.365547 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca2bcba5-5853-4f12-8bde-522e186d1839-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.365558 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/663785fa-d819-4227-a09f-0a7d2b72e7fe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.365648 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4724e828-4305-4fdc-9bec-0af263e7eed9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4724e828-4305-4fdc-9bec-0af263e7eed9" (UID: "4724e828-4305-4fdc-9bec-0af263e7eed9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.368025 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/491fbaf4-ae4b-42f4-a505-70d34407e7ef-kube-api-access-6bjgz" (OuterVolumeSpecName: "kube-api-access-6bjgz") pod "491fbaf4-ae4b-42f4-a505-70d34407e7ef" (UID: "491fbaf4-ae4b-42f4-a505-70d34407e7ef"). InnerVolumeSpecName "kube-api-access-6bjgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.368119 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4724e828-4305-4fdc-9bec-0af263e7eed9-kube-api-access-9nkjh" (OuterVolumeSpecName: "kube-api-access-9nkjh") pod "4724e828-4305-4fdc-9bec-0af263e7eed9" (UID: "4724e828-4305-4fdc-9bec-0af263e7eed9"). InnerVolumeSpecName "kube-api-access-9nkjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.467205 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/491fbaf4-ae4b-42f4-a505-70d34407e7ef-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.467242 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4724e828-4305-4fdc-9bec-0af263e7eed9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.467253 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bjgz\" (UniqueName: \"kubernetes.io/projected/491fbaf4-ae4b-42f4-a505-70d34407e7ef-kube-api-access-6bjgz\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.467269 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nkjh\" (UniqueName: \"kubernetes.io/projected/4724e828-4305-4fdc-9bec-0af263e7eed9-kube-api-access-9nkjh\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.691614 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-fj4kz" event={"ID":"ca2bcba5-5853-4f12-8bde-522e186d1839","Type":"ContainerDied","Data":"7c4821b81034d753ef3622f4d64865d5eff7dbd7d31694e90b67f299b16cb4bf"} Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.691670 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-fj4kz" Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.691684 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c4821b81034d753ef3622f4d64865d5eff7dbd7d31694e90b67f299b16cb4bf" Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.693377 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-fbc8-account-create-update-5vsn8" Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.693364 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-fbc8-account-create-update-5vsn8" event={"ID":"663785fa-d819-4227-a09f-0a7d2b72e7fe","Type":"ContainerDied","Data":"7319ff430d200501a444ff55d94797b97694e3636687b1d35c6442d99fb4ab30"} Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.693446 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7319ff430d200501a444ff55d94797b97694e3636687b1d35c6442d99fb4ab30" Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.695892 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-kjd29" event={"ID":"491fbaf4-ae4b-42f4-a505-70d34407e7ef","Type":"ContainerDied","Data":"746055bea3df6370099ec859b72efd192f2eb31cc60991befbc160290af19f8c"} Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.696129 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="746055bea3df6370099ec859b72efd192f2eb31cc60991befbc160290af19f8c" Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.695973 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-kjd29" Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.697399 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bf54-account-create-update-zxqbm" event={"ID":"4724e828-4305-4fdc-9bec-0af263e7eed9","Type":"ContainerDied","Data":"eb13acb286cfae79cf30b4b32785aa67ae813addb6bc100e3c6418a0f1abf4b4"} Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.697440 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb13acb286cfae79cf30b4b32785aa67ae813addb6bc100e3c6418a0f1abf4b4" Jan 28 15:21:16 crc kubenswrapper[4893]: I0128 15:21:16.697455 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bf54-account-create-update-zxqbm" Jan 28 15:21:17 crc kubenswrapper[4893]: I0128 15:21:17.638648 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/root-account-create-update-92st2"] Jan 28 15:21:17 crc kubenswrapper[4893]: I0128 15:21:17.644543 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/root-account-create-update-92st2"] Jan 28 15:21:17 crc kubenswrapper[4893]: I0128 15:21:17.759298 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/memcached-0" Jan 28 15:21:18 crc kubenswrapper[4893]: I0128 15:21:18.901193 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72ed9ea9-0765-446e-898c-36cddb63c725" path="/var/lib/kubelet/pods/72ed9ea9-0765-446e-898c-36cddb63c725/volumes" Jan 28 15:21:22 crc kubenswrapper[4893]: I0128 15:21:22.644702 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/root-account-create-update-qxs4k"] Jan 28 15:21:22 crc kubenswrapper[4893]: E0128 15:21:22.645302 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="491fbaf4-ae4b-42f4-a505-70d34407e7ef" containerName="mariadb-database-create" Jan 28 15:21:22 crc kubenswrapper[4893]: I0128 15:21:22.645316 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="491fbaf4-ae4b-42f4-a505-70d34407e7ef" containerName="mariadb-database-create" Jan 28 15:21:22 crc kubenswrapper[4893]: E0128 15:21:22.645340 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="663785fa-d819-4227-a09f-0a7d2b72e7fe" containerName="mariadb-account-create-update" Jan 28 15:21:22 crc kubenswrapper[4893]: I0128 15:21:22.645346 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="663785fa-d819-4227-a09f-0a7d2b72e7fe" containerName="mariadb-account-create-update" Jan 28 15:21:22 crc kubenswrapper[4893]: E0128 15:21:22.645357 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4724e828-4305-4fdc-9bec-0af263e7eed9" containerName="mariadb-account-create-update" Jan 28 15:21:22 crc kubenswrapper[4893]: I0128 15:21:22.645366 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="4724e828-4305-4fdc-9bec-0af263e7eed9" containerName="mariadb-account-create-update" Jan 28 15:21:22 crc kubenswrapper[4893]: E0128 15:21:22.645387 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72ed9ea9-0765-446e-898c-36cddb63c725" containerName="mariadb-account-create-update" Jan 28 15:21:22 crc kubenswrapper[4893]: I0128 15:21:22.645396 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="72ed9ea9-0765-446e-898c-36cddb63c725" containerName="mariadb-account-create-update" Jan 28 15:21:22 crc kubenswrapper[4893]: E0128 15:21:22.645408 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2bcba5-5853-4f12-8bde-522e186d1839" containerName="mariadb-database-create" Jan 28 15:21:22 crc kubenswrapper[4893]: I0128 15:21:22.645417 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2bcba5-5853-4f12-8bde-522e186d1839" containerName="mariadb-database-create" Jan 28 15:21:22 crc kubenswrapper[4893]: I0128 15:21:22.645599 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="663785fa-d819-4227-a09f-0a7d2b72e7fe" containerName="mariadb-account-create-update" Jan 28 15:21:22 crc kubenswrapper[4893]: I0128 15:21:22.645619 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="4724e828-4305-4fdc-9bec-0af263e7eed9" 
containerName="mariadb-account-create-update" Jan 28 15:21:22 crc kubenswrapper[4893]: I0128 15:21:22.645634 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca2bcba5-5853-4f12-8bde-522e186d1839" containerName="mariadb-database-create" Jan 28 15:21:22 crc kubenswrapper[4893]: I0128 15:21:22.645642 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="491fbaf4-ae4b-42f4-a505-70d34407e7ef" containerName="mariadb-database-create" Jan 28 15:21:22 crc kubenswrapper[4893]: I0128 15:21:22.645651 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="72ed9ea9-0765-446e-898c-36cddb63c725" containerName="mariadb-account-create-update" Jan 28 15:21:22 crc kubenswrapper[4893]: I0128 15:21:22.646140 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-qxs4k" Jan 28 15:21:22 crc kubenswrapper[4893]: I0128 15:21:22.649809 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-cell1-mariadb-root-db-secret" Jan 28 15:21:22 crc kubenswrapper[4893]: I0128 15:21:22.652997 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-qxs4k"] Jan 28 15:21:22 crc kubenswrapper[4893]: I0128 15:21:22.759212 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drctt\" (UniqueName: \"kubernetes.io/projected/423c812c-5dbb-4719-9b76-c782c05ef6f2-kube-api-access-drctt\") pod \"root-account-create-update-qxs4k\" (UID: \"423c812c-5dbb-4719-9b76-c782c05ef6f2\") " pod="nova-kuttl-default/root-account-create-update-qxs4k" Jan 28 15:21:22 crc kubenswrapper[4893]: I0128 15:21:22.759492 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/423c812c-5dbb-4719-9b76-c782c05ef6f2-operator-scripts\") pod \"root-account-create-update-qxs4k\" (UID: \"423c812c-5dbb-4719-9b76-c782c05ef6f2\") " pod="nova-kuttl-default/root-account-create-update-qxs4k" Jan 28 15:21:22 crc kubenswrapper[4893]: I0128 15:21:22.860772 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drctt\" (UniqueName: \"kubernetes.io/projected/423c812c-5dbb-4719-9b76-c782c05ef6f2-kube-api-access-drctt\") pod \"root-account-create-update-qxs4k\" (UID: \"423c812c-5dbb-4719-9b76-c782c05ef6f2\") " pod="nova-kuttl-default/root-account-create-update-qxs4k" Jan 28 15:21:22 crc kubenswrapper[4893]: I0128 15:21:22.860852 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/423c812c-5dbb-4719-9b76-c782c05ef6f2-operator-scripts\") pod \"root-account-create-update-qxs4k\" (UID: \"423c812c-5dbb-4719-9b76-c782c05ef6f2\") " pod="nova-kuttl-default/root-account-create-update-qxs4k" Jan 28 15:21:22 crc kubenswrapper[4893]: I0128 15:21:22.861744 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/423c812c-5dbb-4719-9b76-c782c05ef6f2-operator-scripts\") pod \"root-account-create-update-qxs4k\" (UID: \"423c812c-5dbb-4719-9b76-c782c05ef6f2\") " pod="nova-kuttl-default/root-account-create-update-qxs4k" Jan 28 15:21:22 crc kubenswrapper[4893]: I0128 15:21:22.882031 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drctt\" (UniqueName: 
\"kubernetes.io/projected/423c812c-5dbb-4719-9b76-c782c05ef6f2-kube-api-access-drctt\") pod \"root-account-create-update-qxs4k\" (UID: \"423c812c-5dbb-4719-9b76-c782c05ef6f2\") " pod="nova-kuttl-default/root-account-create-update-qxs4k" Jan 28 15:21:22 crc kubenswrapper[4893]: I0128 15:21:22.964146 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-qxs4k" Jan 28 15:21:23 crc kubenswrapper[4893]: I0128 15:21:23.401649 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-qxs4k"] Jan 28 15:21:23 crc kubenswrapper[4893]: W0128 15:21:23.407309 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod423c812c_5dbb_4719_9b76_c782c05ef6f2.slice/crio-a1367f114499b184d30e7f3ff475ac719452cc8529fd9673a08d43956bca9af2 WatchSource:0}: Error finding container a1367f114499b184d30e7f3ff475ac719452cc8529fd9673a08d43956bca9af2: Status 404 returned error can't find the container with id a1367f114499b184d30e7f3ff475ac719452cc8529fd9673a08d43956bca9af2 Jan 28 15:21:23 crc kubenswrapper[4893]: I0128 15:21:23.411932 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-cell1-mariadb-root-db-secret" Jan 28 15:21:23 crc kubenswrapper[4893]: I0128 15:21:23.745446 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-qxs4k" event={"ID":"423c812c-5dbb-4719-9b76-c782c05ef6f2","Type":"ContainerStarted","Data":"92bbf08e3e22b539dba9f2b586ccb1a7c7e9c9a08c3307ac3926134d8c83c4a1"} Jan 28 15:21:23 crc kubenswrapper[4893]: I0128 15:21:23.745544 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-qxs4k" event={"ID":"423c812c-5dbb-4719-9b76-c782c05ef6f2","Type":"ContainerStarted","Data":"a1367f114499b184d30e7f3ff475ac719452cc8529fd9673a08d43956bca9af2"} Jan 28 15:21:23 crc kubenswrapper[4893]: I0128 15:21:23.767952 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/root-account-create-update-qxs4k" podStartSLOduration=1.767932243 podStartE2EDuration="1.767932243s" podCreationTimestamp="2026-01-28 15:21:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:23.758537787 +0000 UTC m=+1201.532152815" watchObservedRunningTime="2026-01-28 15:21:23.767932243 +0000 UTC m=+1201.541547271" Jan 28 15:21:24 crc kubenswrapper[4893]: I0128 15:21:24.756203 4893 generic.go:334] "Generic (PLEG): container finished" podID="423c812c-5dbb-4719-9b76-c782c05ef6f2" containerID="92bbf08e3e22b539dba9f2b586ccb1a7c7e9c9a08c3307ac3926134d8c83c4a1" exitCode=0 Jan 28 15:21:24 crc kubenswrapper[4893]: I0128 15:21:24.756273 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-qxs4k" event={"ID":"423c812c-5dbb-4719-9b76-c782c05ef6f2","Type":"ContainerDied","Data":"92bbf08e3e22b539dba9f2b586ccb1a7c7e9c9a08c3307ac3926134d8c83c4a1"} Jan 28 15:21:26 crc kubenswrapper[4893]: I0128 15:21:26.046760 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-qxs4k" Jan 28 15:21:26 crc kubenswrapper[4893]: I0128 15:21:26.224582 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/423c812c-5dbb-4719-9b76-c782c05ef6f2-operator-scripts\") pod \"423c812c-5dbb-4719-9b76-c782c05ef6f2\" (UID: \"423c812c-5dbb-4719-9b76-c782c05ef6f2\") " Jan 28 15:21:26 crc kubenswrapper[4893]: I0128 15:21:26.224712 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drctt\" (UniqueName: \"kubernetes.io/projected/423c812c-5dbb-4719-9b76-c782c05ef6f2-kube-api-access-drctt\") pod \"423c812c-5dbb-4719-9b76-c782c05ef6f2\" (UID: \"423c812c-5dbb-4719-9b76-c782c05ef6f2\") " Jan 28 15:21:26 crc kubenswrapper[4893]: I0128 15:21:26.225643 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/423c812c-5dbb-4719-9b76-c782c05ef6f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "423c812c-5dbb-4719-9b76-c782c05ef6f2" (UID: "423c812c-5dbb-4719-9b76-c782c05ef6f2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:26 crc kubenswrapper[4893]: I0128 15:21:26.231369 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/423c812c-5dbb-4719-9b76-c782c05ef6f2-kube-api-access-drctt" (OuterVolumeSpecName: "kube-api-access-drctt") pod "423c812c-5dbb-4719-9b76-c782c05ef6f2" (UID: "423c812c-5dbb-4719-9b76-c782c05ef6f2"). InnerVolumeSpecName "kube-api-access-drctt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:26 crc kubenswrapper[4893]: I0128 15:21:26.326672 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/423c812c-5dbb-4719-9b76-c782c05ef6f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:26 crc kubenswrapper[4893]: I0128 15:21:26.326719 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drctt\" (UniqueName: \"kubernetes.io/projected/423c812c-5dbb-4719-9b76-c782c05ef6f2-kube-api-access-drctt\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:26 crc kubenswrapper[4893]: I0128 15:21:26.779143 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-qxs4k" event={"ID":"423c812c-5dbb-4719-9b76-c782c05ef6f2","Type":"ContainerDied","Data":"a1367f114499b184d30e7f3ff475ac719452cc8529fd9673a08d43956bca9af2"} Jan 28 15:21:26 crc kubenswrapper[4893]: I0128 15:21:26.779192 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1367f114499b184d30e7f3ff475ac719452cc8529fd9673a08d43956bca9af2" Jan 28 15:21:26 crc kubenswrapper[4893]: I0128 15:21:26.779211 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-qxs4k" Jan 28 15:21:30 crc kubenswrapper[4893]: I0128 15:21:30.816462 4893 generic.go:334] "Generic (PLEG): container finished" podID="67b2b466-ebc4-41d8-8b96-a285eb0609f5" containerID="9e462ce55f992a4384897da48dc3170a1e646efc266afb19bf972aeb6bf0e6a3" exitCode=0 Jan 28 15:21:30 crc kubenswrapper[4893]: I0128 15:21:30.816538 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"67b2b466-ebc4-41d8-8b96-a285eb0609f5","Type":"ContainerDied","Data":"9e462ce55f992a4384897da48dc3170a1e646efc266afb19bf972aeb6bf0e6a3"} Jan 28 15:21:30 crc kubenswrapper[4893]: I0128 15:21:30.819687 4893 generic.go:334] "Generic (PLEG): container finished" podID="dcd1c126-70b7-46e1-8226-bc7dc353ecdb" containerID="ee4f2aaec90a2a7044570546afb4c51006663e7cd5a35772f2f2951f41b45455" exitCode=0 Jan 28 15:21:30 crc kubenswrapper[4893]: I0128 15:21:30.819743 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"dcd1c126-70b7-46e1-8226-bc7dc353ecdb","Type":"ContainerDied","Data":"ee4f2aaec90a2a7044570546afb4c51006663e7cd5a35772f2f2951f41b45455"} Jan 28 15:21:31 crc kubenswrapper[4893]: I0128 15:21:31.828190 4893 generic.go:334] "Generic (PLEG): container finished" podID="01a81616-675d-43ec-acb2-7a4541b96771" containerID="904d17f4f3ea3ab55ddc8e3b2bb0d894ddd320674fb25d3d6bf38d29f17cca6c" exitCode=0 Jan 28 15:21:31 crc kubenswrapper[4893]: I0128 15:21:31.828411 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"01a81616-675d-43ec-acb2-7a4541b96771","Type":"ContainerDied","Data":"904d17f4f3ea3ab55ddc8e3b2bb0d894ddd320674fb25d3d6bf38d29f17cca6c"} Jan 28 15:21:31 crc kubenswrapper[4893]: I0128 15:21:31.838585 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"67b2b466-ebc4-41d8-8b96-a285eb0609f5","Type":"ContainerStarted","Data":"9dfb07d207d3106e999c7322a951499c1ea74a3165a1e7d00ed81c70347ed975"} Jan 28 15:21:31 crc kubenswrapper[4893]: I0128 15:21:31.839636 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:21:31 crc kubenswrapper[4893]: I0128 15:21:31.842140 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"dcd1c126-70b7-46e1-8226-bc7dc353ecdb","Type":"ContainerStarted","Data":"ba8d8287a7ff2996f0236ff10eea92df513984baf627e446b3de4c4562a76c9d"} Jan 28 15:21:31 crc kubenswrapper[4893]: I0128 15:21:31.842615 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:21:31 crc kubenswrapper[4893]: I0128 15:21:31.879887 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" podStartSLOduration=36.576191305 podStartE2EDuration="51.879869968s" podCreationTimestamp="2026-01-28 15:20:40 +0000 UTC" firstStartedPulling="2026-01-28 15:20:41.96236391 +0000 UTC m=+1159.735978938" lastFinishedPulling="2026-01-28 15:20:57.266042583 +0000 UTC m=+1175.039657601" observedRunningTime="2026-01-28 15:21:31.877407981 +0000 UTC m=+1209.651023019" watchObservedRunningTime="2026-01-28 15:21:31.879869968 +0000 UTC m=+1209.653484996" Jan 28 15:21:31 crc kubenswrapper[4893]: I0128 15:21:31.911194 4893 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-cell1-server-0" podStartSLOduration=37.007809807 podStartE2EDuration="51.911171512s" podCreationTimestamp="2026-01-28 15:20:40 +0000 UTC" firstStartedPulling="2026-01-28 15:20:42.374105509 +0000 UTC m=+1160.147720537" lastFinishedPulling="2026-01-28 15:20:57.277467204 +0000 UTC m=+1175.051082242" observedRunningTime="2026-01-28 15:21:31.903330378 +0000 UTC m=+1209.676945406" watchObservedRunningTime="2026-01-28 15:21:31.911171512 +0000 UTC m=+1209.684786540" Jan 28 15:21:32 crc kubenswrapper[4893]: I0128 15:21:32.852003 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"01a81616-675d-43ec-acb2-7a4541b96771","Type":"ContainerStarted","Data":"25d1f1e564a231a738a2f91f58dccc493b446047d734e42a591ce13971694bc7"} Jan 28 15:21:32 crc kubenswrapper[4893]: I0128 15:21:32.853167 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:21:32 crc kubenswrapper[4893]: I0128 15:21:32.879422 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-server-0" podStartSLOduration=38.673691205 podStartE2EDuration="53.879402399s" podCreationTimestamp="2026-01-28 15:20:39 +0000 UTC" firstStartedPulling="2026-01-28 15:20:42.057982195 +0000 UTC m=+1159.831597223" lastFinishedPulling="2026-01-28 15:20:57.263693389 +0000 UTC m=+1175.037308417" observedRunningTime="2026-01-28 15:21:32.876455048 +0000 UTC m=+1210.650070076" watchObservedRunningTime="2026-01-28 15:21:32.879402399 +0000 UTC m=+1210.653017427" Jan 28 15:21:35 crc kubenswrapper[4893]: I0128 15:21:35.722702 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:21:35 crc kubenswrapper[4893]: I0128 15:21:35.723335 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:21:41 crc kubenswrapper[4893]: I0128 15:21:41.646772 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 28 15:21:41 crc kubenswrapper[4893]: I0128 15:21:41.882419 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 28 15:21:51 crc kubenswrapper[4893]: I0128 15:21:51.562676 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-server-0" Jan 28 15:21:52 crc kubenswrapper[4893]: I0128 15:21:52.133847 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-db-sync-nj5vq"] Jan 28 15:21:52 crc kubenswrapper[4893]: E0128 15:21:52.134203 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="423c812c-5dbb-4719-9b76-c782c05ef6f2" containerName="mariadb-account-create-update" Jan 28 15:21:52 crc kubenswrapper[4893]: I0128 15:21:52.134222 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="423c812c-5dbb-4719-9b76-c782c05ef6f2" containerName="mariadb-account-create-update" Jan 28 15:21:52 
crc kubenswrapper[4893]: I0128 15:21:52.134379 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="423c812c-5dbb-4719-9b76-c782c05ef6f2" containerName="mariadb-account-create-update" Jan 28 15:21:52 crc kubenswrapper[4893]: I0128 15:21:52.134873 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-nj5vq" Jan 28 15:21:52 crc kubenswrapper[4893]: I0128 15:21:52.136607 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-nqg9h" Jan 28 15:21:52 crc kubenswrapper[4893]: I0128 15:21:52.137309 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 28 15:21:52 crc kubenswrapper[4893]: I0128 15:21:52.137530 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 28 15:21:52 crc kubenswrapper[4893]: I0128 15:21:52.144809 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 28 15:21:52 crc kubenswrapper[4893]: I0128 15:21:52.147610 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-sync-nj5vq"] Jan 28 15:21:52 crc kubenswrapper[4893]: I0128 15:21:52.286315 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b91b788-69c1-4fc5-8a75-7a32476dcd02-config-data\") pod \"keystone-db-sync-nj5vq\" (UID: \"0b91b788-69c1-4fc5-8a75-7a32476dcd02\") " pod="nova-kuttl-default/keystone-db-sync-nj5vq" Jan 28 15:21:52 crc kubenswrapper[4893]: I0128 15:21:52.286440 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b91b788-69c1-4fc5-8a75-7a32476dcd02-combined-ca-bundle\") pod \"keystone-db-sync-nj5vq\" (UID: \"0b91b788-69c1-4fc5-8a75-7a32476dcd02\") " pod="nova-kuttl-default/keystone-db-sync-nj5vq" Jan 28 15:21:52 crc kubenswrapper[4893]: I0128 15:21:52.286568 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxf2l\" (UniqueName: \"kubernetes.io/projected/0b91b788-69c1-4fc5-8a75-7a32476dcd02-kube-api-access-vxf2l\") pod \"keystone-db-sync-nj5vq\" (UID: \"0b91b788-69c1-4fc5-8a75-7a32476dcd02\") " pod="nova-kuttl-default/keystone-db-sync-nj5vq" Jan 28 15:21:52 crc kubenswrapper[4893]: I0128 15:21:52.388065 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b91b788-69c1-4fc5-8a75-7a32476dcd02-config-data\") pod \"keystone-db-sync-nj5vq\" (UID: \"0b91b788-69c1-4fc5-8a75-7a32476dcd02\") " pod="nova-kuttl-default/keystone-db-sync-nj5vq" Jan 28 15:21:52 crc kubenswrapper[4893]: I0128 15:21:52.388141 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b91b788-69c1-4fc5-8a75-7a32476dcd02-combined-ca-bundle\") pod \"keystone-db-sync-nj5vq\" (UID: \"0b91b788-69c1-4fc5-8a75-7a32476dcd02\") " pod="nova-kuttl-default/keystone-db-sync-nj5vq" Jan 28 15:21:52 crc kubenswrapper[4893]: I0128 15:21:52.388212 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxf2l\" (UniqueName: \"kubernetes.io/projected/0b91b788-69c1-4fc5-8a75-7a32476dcd02-kube-api-access-vxf2l\") pod \"keystone-db-sync-nj5vq\" (UID: 
\"0b91b788-69c1-4fc5-8a75-7a32476dcd02\") " pod="nova-kuttl-default/keystone-db-sync-nj5vq" Jan 28 15:21:52 crc kubenswrapper[4893]: I0128 15:21:52.395997 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b91b788-69c1-4fc5-8a75-7a32476dcd02-config-data\") pod \"keystone-db-sync-nj5vq\" (UID: \"0b91b788-69c1-4fc5-8a75-7a32476dcd02\") " pod="nova-kuttl-default/keystone-db-sync-nj5vq" Jan 28 15:21:52 crc kubenswrapper[4893]: I0128 15:21:52.396026 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b91b788-69c1-4fc5-8a75-7a32476dcd02-combined-ca-bundle\") pod \"keystone-db-sync-nj5vq\" (UID: \"0b91b788-69c1-4fc5-8a75-7a32476dcd02\") " pod="nova-kuttl-default/keystone-db-sync-nj5vq" Jan 28 15:21:52 crc kubenswrapper[4893]: I0128 15:21:52.408929 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxf2l\" (UniqueName: \"kubernetes.io/projected/0b91b788-69c1-4fc5-8a75-7a32476dcd02-kube-api-access-vxf2l\") pod \"keystone-db-sync-nj5vq\" (UID: \"0b91b788-69c1-4fc5-8a75-7a32476dcd02\") " pod="nova-kuttl-default/keystone-db-sync-nj5vq" Jan 28 15:21:52 crc kubenswrapper[4893]: I0128 15:21:52.455989 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-nj5vq" Jan 28 15:21:53 crc kubenswrapper[4893]: I0128 15:21:53.111791 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-sync-nj5vq"] Jan 28 15:21:54 crc kubenswrapper[4893]: I0128 15:21:54.014627 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-nj5vq" event={"ID":"0b91b788-69c1-4fc5-8a75-7a32476dcd02","Type":"ContainerStarted","Data":"64840c35d9aa929583e059aa0324d947c7a94211ea028358c89b7a45cbdbf9f8"} Jan 28 15:22:01 crc kubenswrapper[4893]: I0128 15:22:01.069721 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-nj5vq" event={"ID":"0b91b788-69c1-4fc5-8a75-7a32476dcd02","Type":"ContainerStarted","Data":"92be6fde6a54fcad0a44b8eacc66f7d3fd5537886a3a56d69f335cd2d9615fbf"} Jan 28 15:22:01 crc kubenswrapper[4893]: I0128 15:22:01.087599 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-db-sync-nj5vq" podStartSLOduration=1.6588215800000001 podStartE2EDuration="9.087583893s" podCreationTimestamp="2026-01-28 15:21:52 +0000 UTC" firstStartedPulling="2026-01-28 15:21:53.120038876 +0000 UTC m=+1230.893653904" lastFinishedPulling="2026-01-28 15:22:00.548801189 +0000 UTC m=+1238.322416217" observedRunningTime="2026-01-28 15:22:01.086446563 +0000 UTC m=+1238.860061591" watchObservedRunningTime="2026-01-28 15:22:01.087583893 +0000 UTC m=+1238.861198921" Jan 28 15:22:04 crc kubenswrapper[4893]: I0128 15:22:04.096455 4893 generic.go:334] "Generic (PLEG): container finished" podID="0b91b788-69c1-4fc5-8a75-7a32476dcd02" containerID="92be6fde6a54fcad0a44b8eacc66f7d3fd5537886a3a56d69f335cd2d9615fbf" exitCode=0 Jan 28 15:22:04 crc kubenswrapper[4893]: I0128 15:22:04.096507 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-nj5vq" event={"ID":"0b91b788-69c1-4fc5-8a75-7a32476dcd02","Type":"ContainerDied","Data":"92be6fde6a54fcad0a44b8eacc66f7d3fd5537886a3a56d69f335cd2d9615fbf"} Jan 28 15:22:05 crc kubenswrapper[4893]: I0128 15:22:05.439569 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-nj5vq" Jan 28 15:22:05 crc kubenswrapper[4893]: I0128 15:22:05.617816 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxf2l\" (UniqueName: \"kubernetes.io/projected/0b91b788-69c1-4fc5-8a75-7a32476dcd02-kube-api-access-vxf2l\") pod \"0b91b788-69c1-4fc5-8a75-7a32476dcd02\" (UID: \"0b91b788-69c1-4fc5-8a75-7a32476dcd02\") " Jan 28 15:22:05 crc kubenswrapper[4893]: I0128 15:22:05.617949 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b91b788-69c1-4fc5-8a75-7a32476dcd02-combined-ca-bundle\") pod \"0b91b788-69c1-4fc5-8a75-7a32476dcd02\" (UID: \"0b91b788-69c1-4fc5-8a75-7a32476dcd02\") " Jan 28 15:22:05 crc kubenswrapper[4893]: I0128 15:22:05.618050 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b91b788-69c1-4fc5-8a75-7a32476dcd02-config-data\") pod \"0b91b788-69c1-4fc5-8a75-7a32476dcd02\" (UID: \"0b91b788-69c1-4fc5-8a75-7a32476dcd02\") " Jan 28 15:22:05 crc kubenswrapper[4893]: I0128 15:22:05.626448 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b91b788-69c1-4fc5-8a75-7a32476dcd02-kube-api-access-vxf2l" (OuterVolumeSpecName: "kube-api-access-vxf2l") pod "0b91b788-69c1-4fc5-8a75-7a32476dcd02" (UID: "0b91b788-69c1-4fc5-8a75-7a32476dcd02"). InnerVolumeSpecName "kube-api-access-vxf2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:22:05 crc kubenswrapper[4893]: I0128 15:22:05.643471 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b91b788-69c1-4fc5-8a75-7a32476dcd02-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b91b788-69c1-4fc5-8a75-7a32476dcd02" (UID: "0b91b788-69c1-4fc5-8a75-7a32476dcd02"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:05 crc kubenswrapper[4893]: I0128 15:22:05.664901 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b91b788-69c1-4fc5-8a75-7a32476dcd02-config-data" (OuterVolumeSpecName: "config-data") pod "0b91b788-69c1-4fc5-8a75-7a32476dcd02" (UID: "0b91b788-69c1-4fc5-8a75-7a32476dcd02"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:05 crc kubenswrapper[4893]: I0128 15:22:05.720683 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxf2l\" (UniqueName: \"kubernetes.io/projected/0b91b788-69c1-4fc5-8a75-7a32476dcd02-kube-api-access-vxf2l\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:05 crc kubenswrapper[4893]: I0128 15:22:05.720728 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b91b788-69c1-4fc5-8a75-7a32476dcd02-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:05 crc kubenswrapper[4893]: I0128 15:22:05.720738 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b91b788-69c1-4fc5-8a75-7a32476dcd02-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:05 crc kubenswrapper[4893]: I0128 15:22:05.722291 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:22:05 crc kubenswrapper[4893]: I0128 15:22:05.722360 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.116115 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-nj5vq" event={"ID":"0b91b788-69c1-4fc5-8a75-7a32476dcd02","Type":"ContainerDied","Data":"64840c35d9aa929583e059aa0324d947c7a94211ea028358c89b7a45cbdbf9f8"} Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.116163 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-nj5vq" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.116193 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64840c35d9aa929583e059aa0324d947c7a94211ea028358c89b7a45cbdbf9f8" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.339610 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-bootstrap-2slc5"] Jan 28 15:22:06 crc kubenswrapper[4893]: E0128 15:22:06.340572 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b91b788-69c1-4fc5-8a75-7a32476dcd02" containerName="keystone-db-sync" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.340595 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b91b788-69c1-4fc5-8a75-7a32476dcd02" containerName="keystone-db-sync" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.340770 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b91b788-69c1-4fc5-8a75-7a32476dcd02" containerName="keystone-db-sync" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.341508 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-2slc5" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.345805 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.347161 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"osp-secret" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.347179 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.347434 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.347528 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-nqg9h" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.357838 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-2slc5"] Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.436049 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-credential-keys\") pod \"keystone-bootstrap-2slc5\" (UID: \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\") " pod="nova-kuttl-default/keystone-bootstrap-2slc5" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.436094 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-combined-ca-bundle\") pod \"keystone-bootstrap-2slc5\" (UID: \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\") " pod="nova-kuttl-default/keystone-bootstrap-2slc5" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.436219 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8d59\" (UniqueName: \"kubernetes.io/projected/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-kube-api-access-x8d59\") pod \"keystone-bootstrap-2slc5\" (UID: \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\") " pod="nova-kuttl-default/keystone-bootstrap-2slc5" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.436256 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-scripts\") pod \"keystone-bootstrap-2slc5\" (UID: \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\") " pod="nova-kuttl-default/keystone-bootstrap-2slc5" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.436290 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-fernet-keys\") pod \"keystone-bootstrap-2slc5\" (UID: \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\") " pod="nova-kuttl-default/keystone-bootstrap-2slc5" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.436359 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-config-data\") pod \"keystone-bootstrap-2slc5\" (UID: \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\") " pod="nova-kuttl-default/keystone-bootstrap-2slc5" Jan 28 15:22:06 
crc kubenswrapper[4893]: I0128 15:22:06.537804 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8d59\" (UniqueName: \"kubernetes.io/projected/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-kube-api-access-x8d59\") pod \"keystone-bootstrap-2slc5\" (UID: \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\") " pod="nova-kuttl-default/keystone-bootstrap-2slc5" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.537848 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-scripts\") pod \"keystone-bootstrap-2slc5\" (UID: \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\") " pod="nova-kuttl-default/keystone-bootstrap-2slc5" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.537885 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-fernet-keys\") pod \"keystone-bootstrap-2slc5\" (UID: \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\") " pod="nova-kuttl-default/keystone-bootstrap-2slc5" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.537920 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-config-data\") pod \"keystone-bootstrap-2slc5\" (UID: \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\") " pod="nova-kuttl-default/keystone-bootstrap-2slc5" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.537952 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-credential-keys\") pod \"keystone-bootstrap-2slc5\" (UID: \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\") " pod="nova-kuttl-default/keystone-bootstrap-2slc5" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.538087 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-combined-ca-bundle\") pod \"keystone-bootstrap-2slc5\" (UID: \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\") " pod="nova-kuttl-default/keystone-bootstrap-2slc5" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.542267 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-fernet-keys\") pod \"keystone-bootstrap-2slc5\" (UID: \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\") " pod="nova-kuttl-default/keystone-bootstrap-2slc5" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.542298 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-config-data\") pod \"keystone-bootstrap-2slc5\" (UID: \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\") " pod="nova-kuttl-default/keystone-bootstrap-2slc5" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.543220 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-credential-keys\") pod \"keystone-bootstrap-2slc5\" (UID: \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\") " pod="nova-kuttl-default/keystone-bootstrap-2slc5" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.547057 4893 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-combined-ca-bundle\") pod \"keystone-bootstrap-2slc5\" (UID: \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\") " pod="nova-kuttl-default/keystone-bootstrap-2slc5" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.547280 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-scripts\") pod \"keystone-bootstrap-2slc5\" (UID: \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\") " pod="nova-kuttl-default/keystone-bootstrap-2slc5" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.587432 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8d59\" (UniqueName: \"kubernetes.io/projected/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-kube-api-access-x8d59\") pod \"keystone-bootstrap-2slc5\" (UID: \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\") " pod="nova-kuttl-default/keystone-bootstrap-2slc5" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.660637 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-2slc5" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.704783 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-db-sync-8rs6s"] Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.707973 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-sync-8rs6s" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.714574 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-config-data" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.714873 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-scripts" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.715004 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-placement-dockercfg-jxv75" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.727128 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-sync-8rs6s"] Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.848791 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f5dec8-9ade-45ca-b934-5dece754fc53-config-data\") pod \"placement-db-sync-8rs6s\" (UID: \"17f5dec8-9ade-45ca-b934-5dece754fc53\") " pod="nova-kuttl-default/placement-db-sync-8rs6s" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.848858 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17f5dec8-9ade-45ca-b934-5dece754fc53-logs\") pod \"placement-db-sync-8rs6s\" (UID: \"17f5dec8-9ade-45ca-b934-5dece754fc53\") " pod="nova-kuttl-default/placement-db-sync-8rs6s" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.848895 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqwpz\" (UniqueName: \"kubernetes.io/projected/17f5dec8-9ade-45ca-b934-5dece754fc53-kube-api-access-vqwpz\") pod \"placement-db-sync-8rs6s\" (UID: \"17f5dec8-9ade-45ca-b934-5dece754fc53\") " pod="nova-kuttl-default/placement-db-sync-8rs6s" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.848929 4893 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f5dec8-9ade-45ca-b934-5dece754fc53-combined-ca-bundle\") pod \"placement-db-sync-8rs6s\" (UID: \"17f5dec8-9ade-45ca-b934-5dece754fc53\") " pod="nova-kuttl-default/placement-db-sync-8rs6s" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.848978 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17f5dec8-9ade-45ca-b934-5dece754fc53-scripts\") pod \"placement-db-sync-8rs6s\" (UID: \"17f5dec8-9ade-45ca-b934-5dece754fc53\") " pod="nova-kuttl-default/placement-db-sync-8rs6s" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.950757 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f5dec8-9ade-45ca-b934-5dece754fc53-config-data\") pod \"placement-db-sync-8rs6s\" (UID: \"17f5dec8-9ade-45ca-b934-5dece754fc53\") " pod="nova-kuttl-default/placement-db-sync-8rs6s" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.950810 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17f5dec8-9ade-45ca-b934-5dece754fc53-logs\") pod \"placement-db-sync-8rs6s\" (UID: \"17f5dec8-9ade-45ca-b934-5dece754fc53\") " pod="nova-kuttl-default/placement-db-sync-8rs6s" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.950838 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqwpz\" (UniqueName: \"kubernetes.io/projected/17f5dec8-9ade-45ca-b934-5dece754fc53-kube-api-access-vqwpz\") pod \"placement-db-sync-8rs6s\" (UID: \"17f5dec8-9ade-45ca-b934-5dece754fc53\") " pod="nova-kuttl-default/placement-db-sync-8rs6s" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.950861 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f5dec8-9ade-45ca-b934-5dece754fc53-combined-ca-bundle\") pod \"placement-db-sync-8rs6s\" (UID: \"17f5dec8-9ade-45ca-b934-5dece754fc53\") " pod="nova-kuttl-default/placement-db-sync-8rs6s" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.950907 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17f5dec8-9ade-45ca-b934-5dece754fc53-scripts\") pod \"placement-db-sync-8rs6s\" (UID: \"17f5dec8-9ade-45ca-b934-5dece754fc53\") " pod="nova-kuttl-default/placement-db-sync-8rs6s" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.951326 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17f5dec8-9ade-45ca-b934-5dece754fc53-logs\") pod \"placement-db-sync-8rs6s\" (UID: \"17f5dec8-9ade-45ca-b934-5dece754fc53\") " pod="nova-kuttl-default/placement-db-sync-8rs6s" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.958440 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17f5dec8-9ade-45ca-b934-5dece754fc53-scripts\") pod \"placement-db-sync-8rs6s\" (UID: \"17f5dec8-9ade-45ca-b934-5dece754fc53\") " pod="nova-kuttl-default/placement-db-sync-8rs6s" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.958760 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/17f5dec8-9ade-45ca-b934-5dece754fc53-combined-ca-bundle\") pod \"placement-db-sync-8rs6s\" (UID: \"17f5dec8-9ade-45ca-b934-5dece754fc53\") " pod="nova-kuttl-default/placement-db-sync-8rs6s" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.958855 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f5dec8-9ade-45ca-b934-5dece754fc53-config-data\") pod \"placement-db-sync-8rs6s\" (UID: \"17f5dec8-9ade-45ca-b934-5dece754fc53\") " pod="nova-kuttl-default/placement-db-sync-8rs6s" Jan 28 15:22:06 crc kubenswrapper[4893]: I0128 15:22:06.972203 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqwpz\" (UniqueName: \"kubernetes.io/projected/17f5dec8-9ade-45ca-b934-5dece754fc53-kube-api-access-vqwpz\") pod \"placement-db-sync-8rs6s\" (UID: \"17f5dec8-9ade-45ca-b934-5dece754fc53\") " pod="nova-kuttl-default/placement-db-sync-8rs6s" Jan 28 15:22:07 crc kubenswrapper[4893]: I0128 15:22:07.096315 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-sync-8rs6s" Jan 28 15:22:07 crc kubenswrapper[4893]: I0128 15:22:07.192690 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-2slc5"] Jan 28 15:22:07 crc kubenswrapper[4893]: I0128 15:22:07.361342 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-sync-8rs6s"] Jan 28 15:22:07 crc kubenswrapper[4893]: W0128 15:22:07.363294 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17f5dec8_9ade_45ca_b934_5dece754fc53.slice/crio-822bbec705bcbef1935e0decde35bbb2560b74a0bdfadd5b95edd9d38d2697fb WatchSource:0}: Error finding container 822bbec705bcbef1935e0decde35bbb2560b74a0bdfadd5b95edd9d38d2697fb: Status 404 returned error can't find the container with id 822bbec705bcbef1935e0decde35bbb2560b74a0bdfadd5b95edd9d38d2697fb Jan 28 15:22:08 crc kubenswrapper[4893]: I0128 15:22:08.133356 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-2slc5" event={"ID":"f2b32f45-cb5d-46a5-acc4-038dd32b09d0","Type":"ContainerStarted","Data":"b1e08cc8aaeb9cef6f269a8af9986252ca87e9c362be8ca3dad63bb61ca2a7a4"} Jan 28 15:22:08 crc kubenswrapper[4893]: I0128 15:22:08.133892 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-2slc5" event={"ID":"f2b32f45-cb5d-46a5-acc4-038dd32b09d0","Type":"ContainerStarted","Data":"9174ca2ef56d3f052203e809012edec57bf9a9f9c4bd7108590b84ef2661d6c5"} Jan 28 15:22:08 crc kubenswrapper[4893]: I0128 15:22:08.136360 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-8rs6s" event={"ID":"17f5dec8-9ade-45ca-b934-5dece754fc53","Type":"ContainerStarted","Data":"822bbec705bcbef1935e0decde35bbb2560b74a0bdfadd5b95edd9d38d2697fb"} Jan 28 15:22:08 crc kubenswrapper[4893]: I0128 15:22:08.156391 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-bootstrap-2slc5" podStartSLOduration=2.1563656079999998 podStartE2EDuration="2.156365608s" podCreationTimestamp="2026-01-28 15:22:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:22:08.148290457 +0000 UTC m=+1245.921905505" watchObservedRunningTime="2026-01-28 
Jan 28 15:22:11 crc kubenswrapper[4893]: I0128 15:22:11.165394 4893 generic.go:334] "Generic (PLEG): container finished" podID="f2b32f45-cb5d-46a5-acc4-038dd32b09d0" containerID="b1e08cc8aaeb9cef6f269a8af9986252ca87e9c362be8ca3dad63bb61ca2a7a4" exitCode=0 Jan 28 15:22:11 crc kubenswrapper[4893]: I0128 15:22:11.165534 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-2slc5" event={"ID":"f2b32f45-cb5d-46a5-acc4-038dd32b09d0","Type":"ContainerDied","Data":"b1e08cc8aaeb9cef6f269a8af9986252ca87e9c362be8ca3dad63bb61ca2a7a4"} Jan 28 15:22:12 crc kubenswrapper[4893]: I0128 15:22:12.174783 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-8rs6s" event={"ID":"17f5dec8-9ade-45ca-b934-5dece754fc53","Type":"ContainerStarted","Data":"a30b617dbd75a75d68cda3c363008fea48b22f9f9aa2d3535c4a11d0bbee2a4d"} Jan 28 15:22:12 crc kubenswrapper[4893]: I0128 15:22:12.198340 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/placement-db-sync-8rs6s" podStartSLOduration=2.465242892 podStartE2EDuration="6.198320454s" podCreationTimestamp="2026-01-28 15:22:06 +0000 UTC" firstStartedPulling="2026-01-28 15:22:07.365651783 +0000 UTC m=+1245.139266811" lastFinishedPulling="2026-01-28 15:22:11.098729345 +0000 UTC m=+1248.872344373" observedRunningTime="2026-01-28 15:22:12.194695335 +0000 UTC m=+1249.968310383" watchObservedRunningTime="2026-01-28 15:22:12.198320454 +0000 UTC m=+1249.971935472" Jan 28 15:22:12 crc kubenswrapper[4893]: I0128 15:22:12.540300 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-2slc5" Jan 28 15:22:12 crc kubenswrapper[4893]: I0128 15:22:12.660074 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-scripts\") pod \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\" (UID: \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\") " Jan 28 15:22:12 crc kubenswrapper[4893]: I0128 15:22:12.660289 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8d59\" (UniqueName: \"kubernetes.io/projected/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-kube-api-access-x8d59\") pod \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\" (UID: \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\") " Jan 28 15:22:12 crc kubenswrapper[4893]: I0128 15:22:12.660353 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-credential-keys\") pod \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\" (UID: \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\") " Jan 28 15:22:12 crc kubenswrapper[4893]: I0128 15:22:12.660388 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-combined-ca-bundle\") pod \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\" (UID: \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\") " Jan 28 15:22:12 crc kubenswrapper[4893]: I0128 15:22:12.660546 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-config-data\") pod \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\" (UID: \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\") " Jan 28 15:22:12 crc
kubenswrapper[4893]: I0128 15:22:12.660595 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-fernet-keys\") pod \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\" (UID: \"f2b32f45-cb5d-46a5-acc4-038dd32b09d0\") " Jan 28 15:22:12 crc kubenswrapper[4893]: I0128 15:22:12.667451 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-scripts" (OuterVolumeSpecName: "scripts") pod "f2b32f45-cb5d-46a5-acc4-038dd32b09d0" (UID: "f2b32f45-cb5d-46a5-acc4-038dd32b09d0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:12 crc kubenswrapper[4893]: I0128 15:22:12.667983 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f2b32f45-cb5d-46a5-acc4-038dd32b09d0" (UID: "f2b32f45-cb5d-46a5-acc4-038dd32b09d0"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:12 crc kubenswrapper[4893]: I0128 15:22:12.673051 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-kube-api-access-x8d59" (OuterVolumeSpecName: "kube-api-access-x8d59") pod "f2b32f45-cb5d-46a5-acc4-038dd32b09d0" (UID: "f2b32f45-cb5d-46a5-acc4-038dd32b09d0"). InnerVolumeSpecName "kube-api-access-x8d59". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:22:12 crc kubenswrapper[4893]: I0128 15:22:12.673172 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f2b32f45-cb5d-46a5-acc4-038dd32b09d0" (UID: "f2b32f45-cb5d-46a5-acc4-038dd32b09d0"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:12 crc kubenswrapper[4893]: I0128 15:22:12.686814 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2b32f45-cb5d-46a5-acc4-038dd32b09d0" (UID: "f2b32f45-cb5d-46a5-acc4-038dd32b09d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:12 crc kubenswrapper[4893]: I0128 15:22:12.686842 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-config-data" (OuterVolumeSpecName: "config-data") pod "f2b32f45-cb5d-46a5-acc4-038dd32b09d0" (UID: "f2b32f45-cb5d-46a5-acc4-038dd32b09d0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:12 crc kubenswrapper[4893]: I0128 15:22:12.762924 4893 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:12 crc kubenswrapper[4893]: I0128 15:22:12.762965 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:12 crc kubenswrapper[4893]: I0128 15:22:12.762978 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8d59\" (UniqueName: \"kubernetes.io/projected/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-kube-api-access-x8d59\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:12 crc kubenswrapper[4893]: I0128 15:22:12.762993 4893 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:12 crc kubenswrapper[4893]: I0128 15:22:12.763008 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:12 crc kubenswrapper[4893]: I0128 15:22:12.763018 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b32f45-cb5d-46a5-acc4-038dd32b09d0-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.185434 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-2slc5" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.185447 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-2slc5" event={"ID":"f2b32f45-cb5d-46a5-acc4-038dd32b09d0","Type":"ContainerDied","Data":"9174ca2ef56d3f052203e809012edec57bf9a9f9c4bd7108590b84ef2661d6c5"} Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.186176 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9174ca2ef56d3f052203e809012edec57bf9a9f9c4bd7108590b84ef2661d6c5" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.187048 4893 generic.go:334] "Generic (PLEG): container finished" podID="17f5dec8-9ade-45ca-b934-5dece754fc53" containerID="a30b617dbd75a75d68cda3c363008fea48b22f9f9aa2d3535c4a11d0bbee2a4d" exitCode=0 Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.187076 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-8rs6s" event={"ID":"17f5dec8-9ade-45ca-b934-5dece754fc53","Type":"ContainerDied","Data":"a30b617dbd75a75d68cda3c363008fea48b22f9f9aa2d3535c4a11d0bbee2a4d"} Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.300875 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-2slc5"] Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.306639 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-2slc5"] Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.355955 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-bootstrap-x5kjk"] Jan 28 15:22:13 crc kubenswrapper[4893]: E0128 15:22:13.356673 4893 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f2b32f45-cb5d-46a5-acc4-038dd32b09d0" containerName="keystone-bootstrap" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.356749 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b32f45-cb5d-46a5-acc4-038dd32b09d0" containerName="keystone-bootstrap" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.356968 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2b32f45-cb5d-46a5-acc4-038dd32b09d0" containerName="keystone-bootstrap" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.357598 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-x5kjk" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.363624 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.363873 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.364030 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-nqg9h" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.365277 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"osp-secret" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.366999 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.422949 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-x5kjk"] Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.474492 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-config-data\") pod \"keystone-bootstrap-x5kjk\" (UID: \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\") " pod="nova-kuttl-default/keystone-bootstrap-x5kjk" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.475056 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g467k\" (UniqueName: \"kubernetes.io/projected/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-kube-api-access-g467k\") pod \"keystone-bootstrap-x5kjk\" (UID: \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\") " pod="nova-kuttl-default/keystone-bootstrap-x5kjk" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.475256 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-credential-keys\") pod \"keystone-bootstrap-x5kjk\" (UID: \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\") " pod="nova-kuttl-default/keystone-bootstrap-x5kjk" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.475364 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-combined-ca-bundle\") pod \"keystone-bootstrap-x5kjk\" (UID: \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\") " pod="nova-kuttl-default/keystone-bootstrap-x5kjk" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.475642 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-fernet-keys\") pod \"keystone-bootstrap-x5kjk\" (UID: \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\") " pod="nova-kuttl-default/keystone-bootstrap-x5kjk" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.475860 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-scripts\") pod \"keystone-bootstrap-x5kjk\" (UID: \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\") " pod="nova-kuttl-default/keystone-bootstrap-x5kjk" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.577781 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g467k\" (UniqueName: \"kubernetes.io/projected/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-kube-api-access-g467k\") pod \"keystone-bootstrap-x5kjk\" (UID: \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\") " pod="nova-kuttl-default/keystone-bootstrap-x5kjk" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.577896 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-credential-keys\") pod \"keystone-bootstrap-x5kjk\" (UID: \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\") " pod="nova-kuttl-default/keystone-bootstrap-x5kjk" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.578621 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-combined-ca-bundle\") pod \"keystone-bootstrap-x5kjk\" (UID: \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\") " pod="nova-kuttl-default/keystone-bootstrap-x5kjk" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.578686 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-fernet-keys\") pod \"keystone-bootstrap-x5kjk\" (UID: \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\") " pod="nova-kuttl-default/keystone-bootstrap-x5kjk" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.578729 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-scripts\") pod \"keystone-bootstrap-x5kjk\" (UID: \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\") " pod="nova-kuttl-default/keystone-bootstrap-x5kjk" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.578781 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-config-data\") pod \"keystone-bootstrap-x5kjk\" (UID: \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\") " pod="nova-kuttl-default/keystone-bootstrap-x5kjk" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.583277 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-scripts\") pod \"keystone-bootstrap-x5kjk\" (UID: \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\") " pod="nova-kuttl-default/keystone-bootstrap-x5kjk" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.583303 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-combined-ca-bundle\") pod \"keystone-bootstrap-x5kjk\" (UID: \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\") " pod="nova-kuttl-default/keystone-bootstrap-x5kjk" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.583296 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-credential-keys\") pod \"keystone-bootstrap-x5kjk\" (UID: \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\") " pod="nova-kuttl-default/keystone-bootstrap-x5kjk" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.583743 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-config-data\") pod \"keystone-bootstrap-x5kjk\" (UID: \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\") " pod="nova-kuttl-default/keystone-bootstrap-x5kjk" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.584160 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-fernet-keys\") pod \"keystone-bootstrap-x5kjk\" (UID: \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\") " pod="nova-kuttl-default/keystone-bootstrap-x5kjk" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.597379 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g467k\" (UniqueName: \"kubernetes.io/projected/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-kube-api-access-g467k\") pod \"keystone-bootstrap-x5kjk\" (UID: \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\") " pod="nova-kuttl-default/keystone-bootstrap-x5kjk" Jan 28 15:22:13 crc kubenswrapper[4893]: I0128 15:22:13.743286 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-x5kjk" Jan 28 15:22:14 crc kubenswrapper[4893]: I0128 15:22:14.185269 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-x5kjk"] Jan 28 15:22:14 crc kubenswrapper[4893]: I0128 15:22:14.456883 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-db-sync-8rs6s" Jan 28 15:22:14 crc kubenswrapper[4893]: I0128 15:22:14.607004 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f5dec8-9ade-45ca-b934-5dece754fc53-combined-ca-bundle\") pod \"17f5dec8-9ade-45ca-b934-5dece754fc53\" (UID: \"17f5dec8-9ade-45ca-b934-5dece754fc53\") " Jan 28 15:22:14 crc kubenswrapper[4893]: I0128 15:22:14.607075 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f5dec8-9ade-45ca-b934-5dece754fc53-config-data\") pod \"17f5dec8-9ade-45ca-b934-5dece754fc53\" (UID: \"17f5dec8-9ade-45ca-b934-5dece754fc53\") " Jan 28 15:22:14 crc kubenswrapper[4893]: I0128 15:22:14.607187 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17f5dec8-9ade-45ca-b934-5dece754fc53-logs\") pod \"17f5dec8-9ade-45ca-b934-5dece754fc53\" (UID: \"17f5dec8-9ade-45ca-b934-5dece754fc53\") " Jan 28 15:22:14 crc kubenswrapper[4893]: I0128 15:22:14.607215 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqwpz\" (UniqueName: \"kubernetes.io/projected/17f5dec8-9ade-45ca-b934-5dece754fc53-kube-api-access-vqwpz\") pod \"17f5dec8-9ade-45ca-b934-5dece754fc53\" (UID: \"17f5dec8-9ade-45ca-b934-5dece754fc53\") " Jan 28 15:22:14 crc kubenswrapper[4893]: I0128 15:22:14.607247 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17f5dec8-9ade-45ca-b934-5dece754fc53-scripts\") pod \"17f5dec8-9ade-45ca-b934-5dece754fc53\" (UID: \"17f5dec8-9ade-45ca-b934-5dece754fc53\") " Jan 28 15:22:14 crc kubenswrapper[4893]: I0128 15:22:14.608044 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17f5dec8-9ade-45ca-b934-5dece754fc53-logs" (OuterVolumeSpecName: "logs") pod "17f5dec8-9ade-45ca-b934-5dece754fc53" (UID: "17f5dec8-9ade-45ca-b934-5dece754fc53"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:22:14 crc kubenswrapper[4893]: I0128 15:22:14.610952 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17f5dec8-9ade-45ca-b934-5dece754fc53-scripts" (OuterVolumeSpecName: "scripts") pod "17f5dec8-9ade-45ca-b934-5dece754fc53" (UID: "17f5dec8-9ade-45ca-b934-5dece754fc53"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:14 crc kubenswrapper[4893]: I0128 15:22:14.613706 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17f5dec8-9ade-45ca-b934-5dece754fc53-kube-api-access-vqwpz" (OuterVolumeSpecName: "kube-api-access-vqwpz") pod "17f5dec8-9ade-45ca-b934-5dece754fc53" (UID: "17f5dec8-9ade-45ca-b934-5dece754fc53"). InnerVolumeSpecName "kube-api-access-vqwpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:22:14 crc kubenswrapper[4893]: I0128 15:22:14.629798 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17f5dec8-9ade-45ca-b934-5dece754fc53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17f5dec8-9ade-45ca-b934-5dece754fc53" (UID: "17f5dec8-9ade-45ca-b934-5dece754fc53"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:14 crc kubenswrapper[4893]: I0128 15:22:14.632858 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17f5dec8-9ade-45ca-b934-5dece754fc53-config-data" (OuterVolumeSpecName: "config-data") pod "17f5dec8-9ade-45ca-b934-5dece754fc53" (UID: "17f5dec8-9ade-45ca-b934-5dece754fc53"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:14 crc kubenswrapper[4893]: I0128 15:22:14.709487 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f5dec8-9ade-45ca-b934-5dece754fc53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:14 crc kubenswrapper[4893]: I0128 15:22:14.709521 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f5dec8-9ade-45ca-b934-5dece754fc53-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:14 crc kubenswrapper[4893]: I0128 15:22:14.709532 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17f5dec8-9ade-45ca-b934-5dece754fc53-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:14 crc kubenswrapper[4893]: I0128 15:22:14.709541 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqwpz\" (UniqueName: \"kubernetes.io/projected/17f5dec8-9ade-45ca-b934-5dece754fc53-kube-api-access-vqwpz\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:14 crc kubenswrapper[4893]: I0128 15:22:14.709552 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17f5dec8-9ade-45ca-b934-5dece754fc53-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:14 crc kubenswrapper[4893]: I0128 15:22:14.903980 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2b32f45-cb5d-46a5-acc4-038dd32b09d0" path="/var/lib/kubelet/pods/f2b32f45-cb5d-46a5-acc4-038dd32b09d0/volumes" Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.206034 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-db-sync-8rs6s" Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.206058 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-8rs6s" event={"ID":"17f5dec8-9ade-45ca-b934-5dece754fc53","Type":"ContainerDied","Data":"822bbec705bcbef1935e0decde35bbb2560b74a0bdfadd5b95edd9d38d2697fb"} Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.206163 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="822bbec705bcbef1935e0decde35bbb2560b74a0bdfadd5b95edd9d38d2697fb" Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.208154 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-x5kjk" event={"ID":"74a83a0b-ab25-4eaa-90d5-054bdddfadc8","Type":"ContainerStarted","Data":"4b02e3c8525e8f4527efaea187ff34f903a0ea0fedae6e9491ccaf649d44808e"} Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.208225 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-x5kjk" event={"ID":"74a83a0b-ab25-4eaa-90d5-054bdddfadc8","Type":"ContainerStarted","Data":"6ff8f482b85de563f7c9b9a6b9fe3d9f0e829a52fe4abe5bbfc5c6ef87b9c133"} Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.232080 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-bootstrap-x5kjk" podStartSLOduration=2.23199688 podStartE2EDuration="2.23199688s" podCreationTimestamp="2026-01-28 15:22:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:22:15.230295504 +0000 UTC m=+1253.003910532" watchObservedRunningTime="2026-01-28 15:22:15.23199688 +0000 UTC m=+1253.005611938" Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.326628 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-7dbd979f64-625pv"] Jan 28 15:22:15 crc kubenswrapper[4893]: E0128 15:22:15.326983 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17f5dec8-9ade-45ca-b934-5dece754fc53" containerName="placement-db-sync" Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.327005 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="17f5dec8-9ade-45ca-b934-5dece754fc53" containerName="placement-db-sync" Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.327183 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="17f5dec8-9ade-45ca-b934-5dece754fc53" containerName="placement-db-sync" Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.328391 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-7dbd979f64-625pv" Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.332346 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-placement-dockercfg-jxv75" Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.332593 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-config-data" Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.332686 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-scripts" Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.344663 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-7dbd979f64-625pv"] Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.423422 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdkvk\" (UniqueName: \"kubernetes.io/projected/092a35ea-0d0f-4538-a702-fcf0a09e3683-kube-api-access-cdkvk\") pod \"placement-7dbd979f64-625pv\" (UID: \"092a35ea-0d0f-4538-a702-fcf0a09e3683\") " pod="nova-kuttl-default/placement-7dbd979f64-625pv" Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.423702 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092a35ea-0d0f-4538-a702-fcf0a09e3683-combined-ca-bundle\") pod \"placement-7dbd979f64-625pv\" (UID: \"092a35ea-0d0f-4538-a702-fcf0a09e3683\") " pod="nova-kuttl-default/placement-7dbd979f64-625pv" Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.423740 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/092a35ea-0d0f-4538-a702-fcf0a09e3683-scripts\") pod \"placement-7dbd979f64-625pv\" (UID: \"092a35ea-0d0f-4538-a702-fcf0a09e3683\") " pod="nova-kuttl-default/placement-7dbd979f64-625pv" Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.423782 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/092a35ea-0d0f-4538-a702-fcf0a09e3683-logs\") pod \"placement-7dbd979f64-625pv\" (UID: \"092a35ea-0d0f-4538-a702-fcf0a09e3683\") " pod="nova-kuttl-default/placement-7dbd979f64-625pv" Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.424237 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/092a35ea-0d0f-4538-a702-fcf0a09e3683-config-data\") pod \"placement-7dbd979f64-625pv\" (UID: \"092a35ea-0d0f-4538-a702-fcf0a09e3683\") " pod="nova-kuttl-default/placement-7dbd979f64-625pv" Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.525890 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/092a35ea-0d0f-4538-a702-fcf0a09e3683-config-data\") pod \"placement-7dbd979f64-625pv\" (UID: \"092a35ea-0d0f-4538-a702-fcf0a09e3683\") " pod="nova-kuttl-default/placement-7dbd979f64-625pv" Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.525952 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdkvk\" (UniqueName: \"kubernetes.io/projected/092a35ea-0d0f-4538-a702-fcf0a09e3683-kube-api-access-cdkvk\") pod \"placement-7dbd979f64-625pv\" (UID: 
\"092a35ea-0d0f-4538-a702-fcf0a09e3683\") " pod="nova-kuttl-default/placement-7dbd979f64-625pv" Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.525987 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092a35ea-0d0f-4538-a702-fcf0a09e3683-combined-ca-bundle\") pod \"placement-7dbd979f64-625pv\" (UID: \"092a35ea-0d0f-4538-a702-fcf0a09e3683\") " pod="nova-kuttl-default/placement-7dbd979f64-625pv" Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.526016 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/092a35ea-0d0f-4538-a702-fcf0a09e3683-scripts\") pod \"placement-7dbd979f64-625pv\" (UID: \"092a35ea-0d0f-4538-a702-fcf0a09e3683\") " pod="nova-kuttl-default/placement-7dbd979f64-625pv" Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.526046 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/092a35ea-0d0f-4538-a702-fcf0a09e3683-logs\") pod \"placement-7dbd979f64-625pv\" (UID: \"092a35ea-0d0f-4538-a702-fcf0a09e3683\") " pod="nova-kuttl-default/placement-7dbd979f64-625pv" Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.526634 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/092a35ea-0d0f-4538-a702-fcf0a09e3683-logs\") pod \"placement-7dbd979f64-625pv\" (UID: \"092a35ea-0d0f-4538-a702-fcf0a09e3683\") " pod="nova-kuttl-default/placement-7dbd979f64-625pv" Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.531466 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/092a35ea-0d0f-4538-a702-fcf0a09e3683-config-data\") pod \"placement-7dbd979f64-625pv\" (UID: \"092a35ea-0d0f-4538-a702-fcf0a09e3683\") " pod="nova-kuttl-default/placement-7dbd979f64-625pv" Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.532072 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/092a35ea-0d0f-4538-a702-fcf0a09e3683-combined-ca-bundle\") pod \"placement-7dbd979f64-625pv\" (UID: \"092a35ea-0d0f-4538-a702-fcf0a09e3683\") " pod="nova-kuttl-default/placement-7dbd979f64-625pv" Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.543338 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdkvk\" (UniqueName: \"kubernetes.io/projected/092a35ea-0d0f-4538-a702-fcf0a09e3683-kube-api-access-cdkvk\") pod \"placement-7dbd979f64-625pv\" (UID: \"092a35ea-0d0f-4538-a702-fcf0a09e3683\") " pod="nova-kuttl-default/placement-7dbd979f64-625pv" Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.544333 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/092a35ea-0d0f-4538-a702-fcf0a09e3683-scripts\") pod \"placement-7dbd979f64-625pv\" (UID: \"092a35ea-0d0f-4538-a702-fcf0a09e3683\") " pod="nova-kuttl-default/placement-7dbd979f64-625pv" Jan 28 15:22:15 crc kubenswrapper[4893]: I0128 15:22:15.648631 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-7dbd979f64-625pv" Jan 28 15:22:16 crc kubenswrapper[4893]: I0128 15:22:16.071055 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-7dbd979f64-625pv"] Jan 28 15:22:16 crc kubenswrapper[4893]: I0128 15:22:16.220808 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-7dbd979f64-625pv" event={"ID":"092a35ea-0d0f-4538-a702-fcf0a09e3683","Type":"ContainerStarted","Data":"e85272fc59b14e61dc7202965a3023e30cb6fb47f95d5f732d00d33cd60eac3d"} Jan 28 15:22:17 crc kubenswrapper[4893]: I0128 15:22:17.234537 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-7dbd979f64-625pv" event={"ID":"092a35ea-0d0f-4538-a702-fcf0a09e3683","Type":"ContainerStarted","Data":"320d7ab5758ef505fff7c0b92c84c74897b6ea6162e59ddbf4387631e9580078"} Jan 28 15:22:17 crc kubenswrapper[4893]: I0128 15:22:17.234591 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-7dbd979f64-625pv" event={"ID":"092a35ea-0d0f-4538-a702-fcf0a09e3683","Type":"ContainerStarted","Data":"78687b177c2fa883af5d65a006dafd2614a780c4a04861be5da8b0f64a3baa1b"} Jan 28 15:22:17 crc kubenswrapper[4893]: I0128 15:22:17.236862 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/placement-7dbd979f64-625pv" Jan 28 15:22:17 crc kubenswrapper[4893]: I0128 15:22:17.269431 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/placement-7dbd979f64-625pv" podStartSLOduration=2.2694036459999998 podStartE2EDuration="2.269403646s" podCreationTimestamp="2026-01-28 15:22:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:22:17.263440253 +0000 UTC m=+1255.037055281" watchObservedRunningTime="2026-01-28 15:22:17.269403646 +0000 UTC m=+1255.043018684" Jan 28 15:22:18 crc kubenswrapper[4893]: I0128 15:22:18.247867 4893 generic.go:334] "Generic (PLEG): container finished" podID="74a83a0b-ab25-4eaa-90d5-054bdddfadc8" containerID="4b02e3c8525e8f4527efaea187ff34f903a0ea0fedae6e9491ccaf649d44808e" exitCode=0 Jan 28 15:22:18 crc kubenswrapper[4893]: I0128 15:22:18.247975 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-x5kjk" event={"ID":"74a83a0b-ab25-4eaa-90d5-054bdddfadc8","Type":"ContainerDied","Data":"4b02e3c8525e8f4527efaea187ff34f903a0ea0fedae6e9491ccaf649d44808e"} Jan 28 15:22:18 crc kubenswrapper[4893]: I0128 15:22:18.249461 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/placement-7dbd979f64-625pv" Jan 28 15:22:19 crc kubenswrapper[4893]: I0128 15:22:19.544305 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-x5kjk" Jan 28 15:22:19 crc kubenswrapper[4893]: I0128 15:22:19.603932 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-combined-ca-bundle\") pod \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\" (UID: \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\") " Jan 28 15:22:19 crc kubenswrapper[4893]: I0128 15:22:19.603992 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-credential-keys\") pod \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\" (UID: \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\") " Jan 28 15:22:19 crc kubenswrapper[4893]: I0128 15:22:19.604129 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-fernet-keys\") pod \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\" (UID: \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\") " Jan 28 15:22:19 crc kubenswrapper[4893]: I0128 15:22:19.604174 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-scripts\") pod \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\" (UID: \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\") " Jan 28 15:22:19 crc kubenswrapper[4893]: I0128 15:22:19.604211 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g467k\" (UniqueName: \"kubernetes.io/projected/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-kube-api-access-g467k\") pod \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\" (UID: \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\") " Jan 28 15:22:19 crc kubenswrapper[4893]: I0128 15:22:19.604256 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-config-data\") pod \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\" (UID: \"74a83a0b-ab25-4eaa-90d5-054bdddfadc8\") " Jan 28 15:22:19 crc kubenswrapper[4893]: I0128 15:22:19.609294 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-scripts" (OuterVolumeSpecName: "scripts") pod "74a83a0b-ab25-4eaa-90d5-054bdddfadc8" (UID: "74a83a0b-ab25-4eaa-90d5-054bdddfadc8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:19 crc kubenswrapper[4893]: I0128 15:22:19.609676 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "74a83a0b-ab25-4eaa-90d5-054bdddfadc8" (UID: "74a83a0b-ab25-4eaa-90d5-054bdddfadc8"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:19 crc kubenswrapper[4893]: I0128 15:22:19.609700 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-kube-api-access-g467k" (OuterVolumeSpecName: "kube-api-access-g467k") pod "74a83a0b-ab25-4eaa-90d5-054bdddfadc8" (UID: "74a83a0b-ab25-4eaa-90d5-054bdddfadc8"). InnerVolumeSpecName "kube-api-access-g467k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:22:19 crc kubenswrapper[4893]: I0128 15:22:19.610413 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "74a83a0b-ab25-4eaa-90d5-054bdddfadc8" (UID: "74a83a0b-ab25-4eaa-90d5-054bdddfadc8"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:19 crc kubenswrapper[4893]: I0128 15:22:19.629323 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-config-data" (OuterVolumeSpecName: "config-data") pod "74a83a0b-ab25-4eaa-90d5-054bdddfadc8" (UID: "74a83a0b-ab25-4eaa-90d5-054bdddfadc8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:19 crc kubenswrapper[4893]: I0128 15:22:19.631715 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74a83a0b-ab25-4eaa-90d5-054bdddfadc8" (UID: "74a83a0b-ab25-4eaa-90d5-054bdddfadc8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:19 crc kubenswrapper[4893]: I0128 15:22:19.706169 4893 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:19 crc kubenswrapper[4893]: I0128 15:22:19.706212 4893 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:19 crc kubenswrapper[4893]: I0128 15:22:19.706222 4893 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:19 crc kubenswrapper[4893]: I0128 15:22:19.706233 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:19 crc kubenswrapper[4893]: I0128 15:22:19.706243 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g467k\" (UniqueName: \"kubernetes.io/projected/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-kube-api-access-g467k\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:19 crc kubenswrapper[4893]: I0128 15:22:19.706309 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a83a0b-ab25-4eaa-90d5-054bdddfadc8-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.267744 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-x5kjk" event={"ID":"74a83a0b-ab25-4eaa-90d5-054bdddfadc8","Type":"ContainerDied","Data":"6ff8f482b85de563f7c9b9a6b9fe3d9f0e829a52fe4abe5bbfc5c6ef87b9c133"} Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.268110 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ff8f482b85de563f7c9b9a6b9fe3d9f0e829a52fe4abe5bbfc5c6ef87b9c133" Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.267819 4893 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-x5kjk" Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.352216 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-86bf966444-cll8k"] Jan 28 15:22:20 crc kubenswrapper[4893]: E0128 15:22:20.352654 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74a83a0b-ab25-4eaa-90d5-054bdddfadc8" containerName="keystone-bootstrap" Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.352679 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a83a0b-ab25-4eaa-90d5-054bdddfadc8" containerName="keystone-bootstrap" Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.352868 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="74a83a0b-ab25-4eaa-90d5-054bdddfadc8" containerName="keystone-bootstrap" Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.353510 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-86bf966444-cll8k" Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.357268 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.357412 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.357691 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.359199 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-nqg9h" Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.375340 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-86bf966444-cll8k"] Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.418427 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a79fa730-be33-48f7-9ef0-7964e2afbede-config-data\") pod \"keystone-86bf966444-cll8k\" (UID: \"a79fa730-be33-48f7-9ef0-7964e2afbede\") " pod="nova-kuttl-default/keystone-86bf966444-cll8k" Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.418510 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a79fa730-be33-48f7-9ef0-7964e2afbede-combined-ca-bundle\") pod \"keystone-86bf966444-cll8k\" (UID: \"a79fa730-be33-48f7-9ef0-7964e2afbede\") " pod="nova-kuttl-default/keystone-86bf966444-cll8k" Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.418606 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a79fa730-be33-48f7-9ef0-7964e2afbede-credential-keys\") pod \"keystone-86bf966444-cll8k\" (UID: \"a79fa730-be33-48f7-9ef0-7964e2afbede\") " pod="nova-kuttl-default/keystone-86bf966444-cll8k" Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.418656 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a79fa730-be33-48f7-9ef0-7964e2afbede-scripts\") pod \"keystone-86bf966444-cll8k\" (UID: \"a79fa730-be33-48f7-9ef0-7964e2afbede\") " 
pod="nova-kuttl-default/keystone-86bf966444-cll8k" Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.418735 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw9nl\" (UniqueName: \"kubernetes.io/projected/a79fa730-be33-48f7-9ef0-7964e2afbede-kube-api-access-xw9nl\") pod \"keystone-86bf966444-cll8k\" (UID: \"a79fa730-be33-48f7-9ef0-7964e2afbede\") " pod="nova-kuttl-default/keystone-86bf966444-cll8k" Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.418781 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a79fa730-be33-48f7-9ef0-7964e2afbede-fernet-keys\") pod \"keystone-86bf966444-cll8k\" (UID: \"a79fa730-be33-48f7-9ef0-7964e2afbede\") " pod="nova-kuttl-default/keystone-86bf966444-cll8k" Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.520728 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a79fa730-be33-48f7-9ef0-7964e2afbede-fernet-keys\") pod \"keystone-86bf966444-cll8k\" (UID: \"a79fa730-be33-48f7-9ef0-7964e2afbede\") " pod="nova-kuttl-default/keystone-86bf966444-cll8k" Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.520796 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a79fa730-be33-48f7-9ef0-7964e2afbede-config-data\") pod \"keystone-86bf966444-cll8k\" (UID: \"a79fa730-be33-48f7-9ef0-7964e2afbede\") " pod="nova-kuttl-default/keystone-86bf966444-cll8k" Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.520816 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a79fa730-be33-48f7-9ef0-7964e2afbede-combined-ca-bundle\") pod \"keystone-86bf966444-cll8k\" (UID: \"a79fa730-be33-48f7-9ef0-7964e2afbede\") " pod="nova-kuttl-default/keystone-86bf966444-cll8k" Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.520878 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a79fa730-be33-48f7-9ef0-7964e2afbede-credential-keys\") pod \"keystone-86bf966444-cll8k\" (UID: \"a79fa730-be33-48f7-9ef0-7964e2afbede\") " pod="nova-kuttl-default/keystone-86bf966444-cll8k" Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.520932 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a79fa730-be33-48f7-9ef0-7964e2afbede-scripts\") pod \"keystone-86bf966444-cll8k\" (UID: \"a79fa730-be33-48f7-9ef0-7964e2afbede\") " pod="nova-kuttl-default/keystone-86bf966444-cll8k" Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.521036 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw9nl\" (UniqueName: \"kubernetes.io/projected/a79fa730-be33-48f7-9ef0-7964e2afbede-kube-api-access-xw9nl\") pod \"keystone-86bf966444-cll8k\" (UID: \"a79fa730-be33-48f7-9ef0-7964e2afbede\") " pod="nova-kuttl-default/keystone-86bf966444-cll8k" Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.527363 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a79fa730-be33-48f7-9ef0-7964e2afbede-scripts\") pod \"keystone-86bf966444-cll8k\" (UID: \"a79fa730-be33-48f7-9ef0-7964e2afbede\") " 
pod="nova-kuttl-default/keystone-86bf966444-cll8k" Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.527996 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a79fa730-be33-48f7-9ef0-7964e2afbede-credential-keys\") pod \"keystone-86bf966444-cll8k\" (UID: \"a79fa730-be33-48f7-9ef0-7964e2afbede\") " pod="nova-kuttl-default/keystone-86bf966444-cll8k" Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.528068 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a79fa730-be33-48f7-9ef0-7964e2afbede-fernet-keys\") pod \"keystone-86bf966444-cll8k\" (UID: \"a79fa730-be33-48f7-9ef0-7964e2afbede\") " pod="nova-kuttl-default/keystone-86bf966444-cll8k" Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.528583 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a79fa730-be33-48f7-9ef0-7964e2afbede-combined-ca-bundle\") pod \"keystone-86bf966444-cll8k\" (UID: \"a79fa730-be33-48f7-9ef0-7964e2afbede\") " pod="nova-kuttl-default/keystone-86bf966444-cll8k" Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.532276 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a79fa730-be33-48f7-9ef0-7964e2afbede-config-data\") pod \"keystone-86bf966444-cll8k\" (UID: \"a79fa730-be33-48f7-9ef0-7964e2afbede\") " pod="nova-kuttl-default/keystone-86bf966444-cll8k" Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.536901 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw9nl\" (UniqueName: \"kubernetes.io/projected/a79fa730-be33-48f7-9ef0-7964e2afbede-kube-api-access-xw9nl\") pod \"keystone-86bf966444-cll8k\" (UID: \"a79fa730-be33-48f7-9ef0-7964e2afbede\") " pod="nova-kuttl-default/keystone-86bf966444-cll8k" Jan 28 15:22:20 crc kubenswrapper[4893]: I0128 15:22:20.683413 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-86bf966444-cll8k" Jan 28 15:22:21 crc kubenswrapper[4893]: I0128 15:22:21.091828 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-86bf966444-cll8k"] Jan 28 15:22:21 crc kubenswrapper[4893]: I0128 15:22:21.276955 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-86bf966444-cll8k" event={"ID":"a79fa730-be33-48f7-9ef0-7964e2afbede","Type":"ContainerStarted","Data":"94075377cb7ebf28c2548f890afe3f691b2e5f51b6591a6b3e7ce250a96fdf47"} Jan 28 15:22:22 crc kubenswrapper[4893]: I0128 15:22:22.285296 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-86bf966444-cll8k" event={"ID":"a79fa730-be33-48f7-9ef0-7964e2afbede","Type":"ContainerStarted","Data":"54a535e22c8ea01e5e21b8bec37f17e1c6095a3ee00c546e1a4cb9563250a33a"} Jan 28 15:22:22 crc kubenswrapper[4893]: I0128 15:22:22.286798 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/keystone-86bf966444-cll8k" Jan 28 15:22:22 crc kubenswrapper[4893]: I0128 15:22:22.303131 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-86bf966444-cll8k" podStartSLOduration=2.303112429 podStartE2EDuration="2.303112429s" podCreationTimestamp="2026-01-28 15:22:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:22:22.302607706 +0000 UTC m=+1260.076222744" watchObservedRunningTime="2026-01-28 15:22:22.303112429 +0000 UTC m=+1260.076727457" Jan 28 15:22:35 crc kubenswrapper[4893]: I0128 15:22:35.722359 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:22:35 crc kubenswrapper[4893]: I0128 15:22:35.722862 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:22:35 crc kubenswrapper[4893]: I0128 15:22:35.722909 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" Jan 28 15:22:35 crc kubenswrapper[4893]: I0128 15:22:35.723335 4893 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eaa47c5c31906ab74e7bc044988a1088092bc8e70af984b1414760728f1c9f6e"} pod="openshift-machine-config-operator/machine-config-daemon-l2nht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:22:35 crc kubenswrapper[4893]: I0128 15:22:35.723390 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" containerID="cri-o://eaa47c5c31906ab74e7bc044988a1088092bc8e70af984b1414760728f1c9f6e" gracePeriod=600 Jan 28 15:22:36 crc kubenswrapper[4893]: I0128 15:22:36.399596 4893 generic.go:334] "Generic (PLEG): container finished" 
podID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerID="eaa47c5c31906ab74e7bc044988a1088092bc8e70af984b1414760728f1c9f6e" exitCode=0 Jan 28 15:22:36 crc kubenswrapper[4893]: I0128 15:22:36.399660 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" event={"ID":"b2ddd967-f9a8-464a-95de-512c9c5874fd","Type":"ContainerDied","Data":"eaa47c5c31906ab74e7bc044988a1088092bc8e70af984b1414760728f1c9f6e"} Jan 28 15:22:36 crc kubenswrapper[4893]: I0128 15:22:36.399958 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" event={"ID":"b2ddd967-f9a8-464a-95de-512c9c5874fd","Type":"ContainerStarted","Data":"d8e5d57be71719656edc4624e7904c0b8f16b72637bcea1f2d833d180bb5c4bd"} Jan 28 15:22:36 crc kubenswrapper[4893]: I0128 15:22:36.399990 4893 scope.go:117] "RemoveContainer" containerID="e5fb5a1f3773928c39eda437a9e56f4ecca599067083a7fd3baff85989507ed7" Jan 28 15:22:46 crc kubenswrapper[4893]: I0128 15:22:46.761178 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/placement-7dbd979f64-625pv" Jan 28 15:22:46 crc kubenswrapper[4893]: I0128 15:22:46.804152 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/placement-7dbd979f64-625pv" Jan 28 15:22:52 crc kubenswrapper[4893]: I0128 15:22:52.213518 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/keystone-86bf966444-cll8k" Jan 28 15:22:52 crc kubenswrapper[4893]: I0128 15:22:52.564993 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 28 15:22:52 crc kubenswrapper[4893]: I0128 15:22:52.566578 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstackclient" Jan 28 15:22:52 crc kubenswrapper[4893]: I0128 15:22:52.572718 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 28 15:22:52 crc kubenswrapper[4893]: I0128 15:22:52.600911 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstackclient-openstackclient-dockercfg-bfb9x" Jan 28 15:22:52 crc kubenswrapper[4893]: I0128 15:22:52.601033 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-config" Jan 28 15:22:52 crc kubenswrapper[4893]: I0128 15:22:52.601397 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-config-secret" Jan 28 15:22:52 crc kubenswrapper[4893]: I0128 15:22:52.645614 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crsp7\" (UniqueName: \"kubernetes.io/projected/5a7bef9d-825c-491a-887c-651ea4b6ca59-kube-api-access-crsp7\") pod \"openstackclient\" (UID: \"5a7bef9d-825c-491a-887c-651ea4b6ca59\") " pod="nova-kuttl-default/openstackclient" Jan 28 15:22:52 crc kubenswrapper[4893]: I0128 15:22:52.645674 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a7bef9d-825c-491a-887c-651ea4b6ca59-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5a7bef9d-825c-491a-887c-651ea4b6ca59\") " pod="nova-kuttl-default/openstackclient" Jan 28 15:22:52 crc kubenswrapper[4893]: I0128 15:22:52.645724 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5a7bef9d-825c-491a-887c-651ea4b6ca59-openstack-config-secret\") pod \"openstackclient\" (UID: \"5a7bef9d-825c-491a-887c-651ea4b6ca59\") " pod="nova-kuttl-default/openstackclient" Jan 28 15:22:52 crc kubenswrapper[4893]: I0128 15:22:52.645761 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5a7bef9d-825c-491a-887c-651ea4b6ca59-openstack-config\") pod \"openstackclient\" (UID: \"5a7bef9d-825c-491a-887c-651ea4b6ca59\") " pod="nova-kuttl-default/openstackclient" Jan 28 15:22:52 crc kubenswrapper[4893]: I0128 15:22:52.747129 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5a7bef9d-825c-491a-887c-651ea4b6ca59-openstack-config\") pod \"openstackclient\" (UID: \"5a7bef9d-825c-491a-887c-651ea4b6ca59\") " pod="nova-kuttl-default/openstackclient" Jan 28 15:22:52 crc kubenswrapper[4893]: I0128 15:22:52.747233 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crsp7\" (UniqueName: \"kubernetes.io/projected/5a7bef9d-825c-491a-887c-651ea4b6ca59-kube-api-access-crsp7\") pod \"openstackclient\" (UID: \"5a7bef9d-825c-491a-887c-651ea4b6ca59\") " pod="nova-kuttl-default/openstackclient" Jan 28 15:22:52 crc kubenswrapper[4893]: I0128 15:22:52.747261 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a7bef9d-825c-491a-887c-651ea4b6ca59-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5a7bef9d-825c-491a-887c-651ea4b6ca59\") " pod="nova-kuttl-default/openstackclient" Jan 28 15:22:52 crc 
kubenswrapper[4893]: I0128 15:22:52.747300 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5a7bef9d-825c-491a-887c-651ea4b6ca59-openstack-config-secret\") pod \"openstackclient\" (UID: \"5a7bef9d-825c-491a-887c-651ea4b6ca59\") " pod="nova-kuttl-default/openstackclient" Jan 28 15:22:52 crc kubenswrapper[4893]: I0128 15:22:52.748903 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5a7bef9d-825c-491a-887c-651ea4b6ca59-openstack-config\") pod \"openstackclient\" (UID: \"5a7bef9d-825c-491a-887c-651ea4b6ca59\") " pod="nova-kuttl-default/openstackclient" Jan 28 15:22:52 crc kubenswrapper[4893]: I0128 15:22:52.758004 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5a7bef9d-825c-491a-887c-651ea4b6ca59-openstack-config-secret\") pod \"openstackclient\" (UID: \"5a7bef9d-825c-491a-887c-651ea4b6ca59\") " pod="nova-kuttl-default/openstackclient" Jan 28 15:22:52 crc kubenswrapper[4893]: I0128 15:22:52.762666 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a7bef9d-825c-491a-887c-651ea4b6ca59-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5a7bef9d-825c-491a-887c-651ea4b6ca59\") " pod="nova-kuttl-default/openstackclient" Jan 28 15:22:52 crc kubenswrapper[4893]: I0128 15:22:52.767783 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crsp7\" (UniqueName: \"kubernetes.io/projected/5a7bef9d-825c-491a-887c-651ea4b6ca59-kube-api-access-crsp7\") pod \"openstackclient\" (UID: \"5a7bef9d-825c-491a-887c-651ea4b6ca59\") " pod="nova-kuttl-default/openstackclient" Jan 28 15:22:52 crc kubenswrapper[4893]: I0128 15:22:52.924946 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstackclient" Jan 28 15:22:53 crc kubenswrapper[4893]: I0128 15:22:53.377707 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 28 15:22:53 crc kubenswrapper[4893]: I0128 15:22:53.532563 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstackclient" event={"ID":"5a7bef9d-825c-491a-887c-651ea4b6ca59","Type":"ContainerStarted","Data":"d11709042c2d85ded484711a3c86e61500781820e6f93d124f5e1bf58b0324b5"} Jan 28 15:23:01 crc kubenswrapper[4893]: I0128 15:23:01.596687 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstackclient" event={"ID":"5a7bef9d-825c-491a-887c-651ea4b6ca59","Type":"ContainerStarted","Data":"1d178df1f6cc38a0cd3d94de8e83c18ceaa993c4d5a786739026d79e80bf1655"} Jan 28 15:23:14 crc kubenswrapper[4893]: I0128 15:23:14.987104 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/openstackclient" podStartSLOduration=15.739148862 podStartE2EDuration="22.987084851s" podCreationTimestamp="2026-01-28 15:22:52 +0000 UTC" firstStartedPulling="2026-01-28 15:22:53.394360943 +0000 UTC m=+1291.167975971" lastFinishedPulling="2026-01-28 15:23:00.642296932 +0000 UTC m=+1298.415911960" observedRunningTime="2026-01-28 15:23:01.613403137 +0000 UTC m=+1299.387018155" watchObservedRunningTime="2026-01-28 15:23:14.987084851 +0000 UTC m=+1312.760699879" Jan 28 15:23:14 crc kubenswrapper[4893]: I0128 15:23:14.994599 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6cdf9dd67-8gqfg"] Jan 28 15:23:14 crc kubenswrapper[4893]: I0128 15:23:14.994886 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-init-6cdf9dd67-8gqfg" podUID="2da9b6a3-e6da-4a7d-8bf2-98d9e50ea1e1" containerName="operator" containerID="cri-o://3003fd0eea2feb9d34df73fe20ac1d0c577d38d6a4ccff8e130363ca9ca27033" gracePeriod=10 Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.174161 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-controller-manager-75d84bc6b9-s5v4q"] Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.174394 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/nova-operator-controller-manager-75d84bc6b9-s5v4q" podUID="e130bc9f-0869-42a0-922b-db361e6b26f3" containerName="manager" containerID="cri-o://b8013829a44f75ba0b541f002acd9e81a74b3b2d38a28b692babfa35289b5eed" gracePeriod=10 Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.465741 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6cdf9dd67-8gqfg" Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.538708 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mscz9\" (UniqueName: \"kubernetes.io/projected/2da9b6a3-e6da-4a7d-8bf2-98d9e50ea1e1-kube-api-access-mscz9\") pod \"2da9b6a3-e6da-4a7d-8bf2-98d9e50ea1e1\" (UID: \"2da9b6a3-e6da-4a7d-8bf2-98d9e50ea1e1\") " Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.552922 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2da9b6a3-e6da-4a7d-8bf2-98d9e50ea1e1-kube-api-access-mscz9" (OuterVolumeSpecName: "kube-api-access-mscz9") pod "2da9b6a3-e6da-4a7d-8bf2-98d9e50ea1e1" (UID: "2da9b6a3-e6da-4a7d-8bf2-98d9e50ea1e1"). InnerVolumeSpecName "kube-api-access-mscz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.590401 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-75d84bc6b9-s5v4q" Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.641360 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hmlh\" (UniqueName: \"kubernetes.io/projected/e130bc9f-0869-42a0-922b-db361e6b26f3-kube-api-access-2hmlh\") pod \"e130bc9f-0869-42a0-922b-db361e6b26f3\" (UID: \"e130bc9f-0869-42a0-922b-db361e6b26f3\") " Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.642455 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mscz9\" (UniqueName: \"kubernetes.io/projected/2da9b6a3-e6da-4a7d-8bf2-98d9e50ea1e1-kube-api-access-mscz9\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.646871 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e130bc9f-0869-42a0-922b-db361e6b26f3-kube-api-access-2hmlh" (OuterVolumeSpecName: "kube-api-access-2hmlh") pod "e130bc9f-0869-42a0-922b-db361e6b26f3" (UID: "e130bc9f-0869-42a0-922b-db361e6b26f3"). InnerVolumeSpecName "kube-api-access-2hmlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.706668 4893 generic.go:334] "Generic (PLEG): container finished" podID="2da9b6a3-e6da-4a7d-8bf2-98d9e50ea1e1" containerID="3003fd0eea2feb9d34df73fe20ac1d0c577d38d6a4ccff8e130363ca9ca27033" exitCode=0 Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.706740 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6cdf9dd67-8gqfg" Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.706787 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6cdf9dd67-8gqfg" event={"ID":"2da9b6a3-e6da-4a7d-8bf2-98d9e50ea1e1","Type":"ContainerDied","Data":"3003fd0eea2feb9d34df73fe20ac1d0c577d38d6a4ccff8e130363ca9ca27033"} Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.706828 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6cdf9dd67-8gqfg" event={"ID":"2da9b6a3-e6da-4a7d-8bf2-98d9e50ea1e1","Type":"ContainerDied","Data":"b3953e60d7bba752666f09fe03db4c2956e1315163b9f38c7b1d87159bf8c68a"} Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.706851 4893 scope.go:117] "RemoveContainer" containerID="3003fd0eea2feb9d34df73fe20ac1d0c577d38d6a4ccff8e130363ca9ca27033" Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.709263 4893 generic.go:334] "Generic (PLEG): container finished" podID="e130bc9f-0869-42a0-922b-db361e6b26f3" containerID="b8013829a44f75ba0b541f002acd9e81a74b3b2d38a28b692babfa35289b5eed" exitCode=0 Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.709302 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-75d84bc6b9-s5v4q" event={"ID":"e130bc9f-0869-42a0-922b-db361e6b26f3","Type":"ContainerDied","Data":"b8013829a44f75ba0b541f002acd9e81a74b3b2d38a28b692babfa35289b5eed"} Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.709341 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-75d84bc6b9-s5v4q" event={"ID":"e130bc9f-0869-42a0-922b-db361e6b26f3","Type":"ContainerDied","Data":"3afa2597ebc877a41bab02a9cf8a44d3a4eba33e1c757a2a9e987cd5b47842e4"} Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.709421 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-75d84bc6b9-s5v4q" Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.730935 4893 scope.go:117] "RemoveContainer" containerID="3003fd0eea2feb9d34df73fe20ac1d0c577d38d6a4ccff8e130363ca9ca27033" Jan 28 15:23:15 crc kubenswrapper[4893]: E0128 15:23:15.731435 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3003fd0eea2feb9d34df73fe20ac1d0c577d38d6a4ccff8e130363ca9ca27033\": container with ID starting with 3003fd0eea2feb9d34df73fe20ac1d0c577d38d6a4ccff8e130363ca9ca27033 not found: ID does not exist" containerID="3003fd0eea2feb9d34df73fe20ac1d0c577d38d6a4ccff8e130363ca9ca27033" Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.731469 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3003fd0eea2feb9d34df73fe20ac1d0c577d38d6a4ccff8e130363ca9ca27033"} err="failed to get container status \"3003fd0eea2feb9d34df73fe20ac1d0c577d38d6a4ccff8e130363ca9ca27033\": rpc error: code = NotFound desc = could not find container \"3003fd0eea2feb9d34df73fe20ac1d0c577d38d6a4ccff8e130363ca9ca27033\": container with ID starting with 3003fd0eea2feb9d34df73fe20ac1d0c577d38d6a4ccff8e130363ca9ca27033 not found: ID does not exist" Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.731507 4893 scope.go:117] "RemoveContainer" containerID="b8013829a44f75ba0b541f002acd9e81a74b3b2d38a28b692babfa35289b5eed" Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.743572 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hmlh\" (UniqueName: \"kubernetes.io/projected/e130bc9f-0869-42a0-922b-db361e6b26f3-kube-api-access-2hmlh\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.754662 4893 scope.go:117] "RemoveContainer" containerID="b8013829a44f75ba0b541f002acd9e81a74b3b2d38a28b692babfa35289b5eed" Jan 28 15:23:15 crc kubenswrapper[4893]: E0128 15:23:15.755118 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8013829a44f75ba0b541f002acd9e81a74b3b2d38a28b692babfa35289b5eed\": container with ID starting with b8013829a44f75ba0b541f002acd9e81a74b3b2d38a28b692babfa35289b5eed not found: ID does not exist" containerID="b8013829a44f75ba0b541f002acd9e81a74b3b2d38a28b692babfa35289b5eed" Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.755182 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8013829a44f75ba0b541f002acd9e81a74b3b2d38a28b692babfa35289b5eed"} err="failed to get container status \"b8013829a44f75ba0b541f002acd9e81a74b3b2d38a28b692babfa35289b5eed\": rpc error: code = NotFound desc = could not find container \"b8013829a44f75ba0b541f002acd9e81a74b3b2d38a28b692babfa35289b5eed\": container with ID starting with b8013829a44f75ba0b541f002acd9e81a74b3b2d38a28b692babfa35289b5eed not found: ID does not exist" Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.935662 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-index-99pss"] Jan 28 15:23:15 crc kubenswrapper[4893]: E0128 15:23:15.936118 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da9b6a3-e6da-4a7d-8bf2-98d9e50ea1e1" containerName="operator" Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.936133 4893 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2da9b6a3-e6da-4a7d-8bf2-98d9e50ea1e1" containerName="operator" Jan 28 15:23:15 crc kubenswrapper[4893]: E0128 15:23:15.936162 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e130bc9f-0869-42a0-922b-db361e6b26f3" containerName="manager" Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.936168 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="e130bc9f-0869-42a0-922b-db361e6b26f3" containerName="manager" Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.936334 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="2da9b6a3-e6da-4a7d-8bf2-98d9e50ea1e1" containerName="operator" Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.936348 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="e130bc9f-0869-42a0-922b-db361e6b26f3" containerName="manager" Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.936934 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-99pss" Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.944170 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-index-dockercfg-qxfmz" Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.957234 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-99pss"] Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.967961 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6cdf9dd67-8gqfg"] Jan 28 15:23:15 crc kubenswrapper[4893]: I0128 15:23:15.974332 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6cdf9dd67-8gqfg"] Jan 28 15:23:16 crc kubenswrapper[4893]: I0128 15:23:16.007781 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-controller-manager-75d84bc6b9-s5v4q"] Jan 28 15:23:16 crc kubenswrapper[4893]: I0128 15:23:16.019613 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/nova-operator-controller-manager-75d84bc6b9-s5v4q"] Jan 28 15:23:16 crc kubenswrapper[4893]: I0128 15:23:16.149687 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xv8k\" (UniqueName: \"kubernetes.io/projected/04786631-a21b-4006-ab43-c98ac66a34cb-kube-api-access-2xv8k\") pod \"nova-operator-index-99pss\" (UID: \"04786631-a21b-4006-ab43-c98ac66a34cb\") " pod="openstack-operators/nova-operator-index-99pss" Jan 28 15:23:16 crc kubenswrapper[4893]: I0128 15:23:16.251107 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xv8k\" (UniqueName: \"kubernetes.io/projected/04786631-a21b-4006-ab43-c98ac66a34cb-kube-api-access-2xv8k\") pod \"nova-operator-index-99pss\" (UID: \"04786631-a21b-4006-ab43-c98ac66a34cb\") " pod="openstack-operators/nova-operator-index-99pss" Jan 28 15:23:16 crc kubenswrapper[4893]: I0128 15:23:16.282515 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xv8k\" (UniqueName: \"kubernetes.io/projected/04786631-a21b-4006-ab43-c98ac66a34cb-kube-api-access-2xv8k\") pod \"nova-operator-index-99pss\" (UID: \"04786631-a21b-4006-ab43-c98ac66a34cb\") " pod="openstack-operators/nova-operator-index-99pss" Jan 28 15:23:16 crc kubenswrapper[4893]: I0128 15:23:16.557334 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-index-99pss" Jan 28 15:23:16 crc kubenswrapper[4893]: I0128 15:23:16.905146 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2da9b6a3-e6da-4a7d-8bf2-98d9e50ea1e1" path="/var/lib/kubelet/pods/2da9b6a3-e6da-4a7d-8bf2-98d9e50ea1e1/volumes" Jan 28 15:23:16 crc kubenswrapper[4893]: I0128 15:23:16.906615 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e130bc9f-0869-42a0-922b-db361e6b26f3" path="/var/lib/kubelet/pods/e130bc9f-0869-42a0-922b-db361e6b26f3/volumes" Jan 28 15:23:17 crc kubenswrapper[4893]: I0128 15:23:17.008873 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-99pss"] Jan 28 15:23:17 crc kubenswrapper[4893]: W0128 15:23:17.010931 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04786631_a21b_4006_ab43_c98ac66a34cb.slice/crio-53aa532953169171571622e9eaa8c470456295b82d0dbb8cfd67b4ad46dfd248 WatchSource:0}: Error finding container 53aa532953169171571622e9eaa8c470456295b82d0dbb8cfd67b4ad46dfd248: Status 404 returned error can't find the container with id 53aa532953169171571622e9eaa8c470456295b82d0dbb8cfd67b4ad46dfd248 Jan 28 15:23:17 crc kubenswrapper[4893]: I0128 15:23:17.731959 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-99pss" event={"ID":"04786631-a21b-4006-ab43-c98ac66a34cb","Type":"ContainerStarted","Data":"7a8dc4aa30d5afbd0202f754c1a228006ec7551129ee422206d332c4b8ab0421"} Jan 28 15:23:17 crc kubenswrapper[4893]: I0128 15:23:17.732012 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-99pss" event={"ID":"04786631-a21b-4006-ab43-c98ac66a34cb","Type":"ContainerStarted","Data":"53aa532953169171571622e9eaa8c470456295b82d0dbb8cfd67b4ad46dfd248"} Jan 28 15:23:17 crc kubenswrapper[4893]: I0128 15:23:17.752885 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-index-99pss" podStartSLOduration=2.393710695 podStartE2EDuration="2.752854063s" podCreationTimestamp="2026-01-28 15:23:15 +0000 UTC" firstStartedPulling="2026-01-28 15:23:17.014510735 +0000 UTC m=+1314.788125773" lastFinishedPulling="2026-01-28 15:23:17.373654113 +0000 UTC m=+1315.147269141" observedRunningTime="2026-01-28 15:23:17.752160255 +0000 UTC m=+1315.525775273" watchObservedRunningTime="2026-01-28 15:23:17.752854063 +0000 UTC m=+1315.526469101" Jan 28 15:23:26 crc kubenswrapper[4893]: I0128 15:23:26.559720 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-index-99pss" Jan 28 15:23:26 crc kubenswrapper[4893]: I0128 15:23:26.560352 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/nova-operator-index-99pss" Jan 28 15:23:26 crc kubenswrapper[4893]: I0128 15:23:26.591564 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/nova-operator-index-99pss" Jan 28 15:23:26 crc kubenswrapper[4893]: I0128 15:23:26.821071 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-index-99pss" Jan 28 15:23:35 crc kubenswrapper[4893]: I0128 15:23:35.126818 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l"] Jan 28 15:23:35 crc 
kubenswrapper[4893]: I0128 15:23:35.129462 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l" Jan 28 15:23:35 crc kubenswrapper[4893]: I0128 15:23:35.134365 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-srhb4" Jan 28 15:23:35 crc kubenswrapper[4893]: I0128 15:23:35.140894 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l"] Jan 28 15:23:35 crc kubenswrapper[4893]: I0128 15:23:35.224043 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26q88\" (UniqueName: \"kubernetes.io/projected/e0602f55-847f-4987-ba4c-9aa5fb47ad7d-kube-api-access-26q88\") pod \"35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l\" (UID: \"e0602f55-847f-4987-ba4c-9aa5fb47ad7d\") " pod="openstack-operators/35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l" Jan 28 15:23:35 crc kubenswrapper[4893]: I0128 15:23:35.224168 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e0602f55-847f-4987-ba4c-9aa5fb47ad7d-util\") pod \"35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l\" (UID: \"e0602f55-847f-4987-ba4c-9aa5fb47ad7d\") " pod="openstack-operators/35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l" Jan 28 15:23:35 crc kubenswrapper[4893]: I0128 15:23:35.224308 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e0602f55-847f-4987-ba4c-9aa5fb47ad7d-bundle\") pod \"35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l\" (UID: \"e0602f55-847f-4987-ba4c-9aa5fb47ad7d\") " pod="openstack-operators/35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l" Jan 28 15:23:35 crc kubenswrapper[4893]: I0128 15:23:35.326027 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e0602f55-847f-4987-ba4c-9aa5fb47ad7d-bundle\") pod \"35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l\" (UID: \"e0602f55-847f-4987-ba4c-9aa5fb47ad7d\") " pod="openstack-operators/35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l" Jan 28 15:23:35 crc kubenswrapper[4893]: I0128 15:23:35.326151 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26q88\" (UniqueName: \"kubernetes.io/projected/e0602f55-847f-4987-ba4c-9aa5fb47ad7d-kube-api-access-26q88\") pod \"35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l\" (UID: \"e0602f55-847f-4987-ba4c-9aa5fb47ad7d\") " pod="openstack-operators/35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l" Jan 28 15:23:35 crc kubenswrapper[4893]: I0128 15:23:35.326216 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e0602f55-847f-4987-ba4c-9aa5fb47ad7d-util\") pod \"35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l\" (UID: \"e0602f55-847f-4987-ba4c-9aa5fb47ad7d\") " pod="openstack-operators/35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l" Jan 28 15:23:35 crc kubenswrapper[4893]: I0128 15:23:35.326733 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e0602f55-847f-4987-ba4c-9aa5fb47ad7d-bundle\") pod \"35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l\" (UID: \"e0602f55-847f-4987-ba4c-9aa5fb47ad7d\") " pod="openstack-operators/35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l" Jan 28 15:23:35 crc kubenswrapper[4893]: I0128 15:23:35.326779 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e0602f55-847f-4987-ba4c-9aa5fb47ad7d-util\") pod \"35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l\" (UID: \"e0602f55-847f-4987-ba4c-9aa5fb47ad7d\") " pod="openstack-operators/35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l" Jan 28 15:23:35 crc kubenswrapper[4893]: I0128 15:23:35.350885 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26q88\" (UniqueName: \"kubernetes.io/projected/e0602f55-847f-4987-ba4c-9aa5fb47ad7d-kube-api-access-26q88\") pod \"35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l\" (UID: \"e0602f55-847f-4987-ba4c-9aa5fb47ad7d\") " pod="openstack-operators/35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l" Jan 28 15:23:35 crc kubenswrapper[4893]: I0128 15:23:35.453647 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l" Jan 28 15:23:35 crc kubenswrapper[4893]: I0128 15:23:35.830912 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l"] Jan 28 15:23:35 crc kubenswrapper[4893]: I0128 15:23:35.862678 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l" event={"ID":"e0602f55-847f-4987-ba4c-9aa5fb47ad7d","Type":"ContainerStarted","Data":"923a323fc4daacdc304c0a057f7b3492907485dfad95a4a71c9c747173ae9a3c"} Jan 28 15:23:36 crc kubenswrapper[4893]: I0128 15:23:36.871225 4893 generic.go:334] "Generic (PLEG): container finished" podID="e0602f55-847f-4987-ba4c-9aa5fb47ad7d" containerID="c99bae892da759a198abd0e9e5793ad4ac0d6047876cd9da9ca923d51cdc24c6" exitCode=0 Jan 28 15:23:36 crc kubenswrapper[4893]: I0128 15:23:36.871280 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l" event={"ID":"e0602f55-847f-4987-ba4c-9aa5fb47ad7d","Type":"ContainerDied","Data":"c99bae892da759a198abd0e9e5793ad4ac0d6047876cd9da9ca923d51cdc24c6"} Jan 28 15:23:37 crc kubenswrapper[4893]: I0128 15:23:37.881269 4893 generic.go:334] "Generic (PLEG): container finished" podID="e0602f55-847f-4987-ba4c-9aa5fb47ad7d" containerID="bbbf7dd94beed6bb7b050e4fa3d3393c9a005e1a75dc4e882f95e64c0ada9f26" exitCode=0 Jan 28 15:23:37 crc kubenswrapper[4893]: I0128 15:23:37.881318 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l" event={"ID":"e0602f55-847f-4987-ba4c-9aa5fb47ad7d","Type":"ContainerDied","Data":"bbbf7dd94beed6bb7b050e4fa3d3393c9a005e1a75dc4e882f95e64c0ada9f26"} Jan 28 15:23:38 crc kubenswrapper[4893]: I0128 15:23:38.893744 4893 generic.go:334] "Generic (PLEG): container finished" podID="e0602f55-847f-4987-ba4c-9aa5fb47ad7d" containerID="6f98f72df7748447fa8e7ba00773128dbf46bcf21a8eb2b4531f189910e85d17" exitCode=0 Jan 28 15:23:38 crc kubenswrapper[4893]: 
I0128 15:23:38.902626 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l" event={"ID":"e0602f55-847f-4987-ba4c-9aa5fb47ad7d","Type":"ContainerDied","Data":"6f98f72df7748447fa8e7ba00773128dbf46bcf21a8eb2b4531f189910e85d17"} Jan 28 15:23:40 crc kubenswrapper[4893]: I0128 15:23:40.268954 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l" Jan 28 15:23:40 crc kubenswrapper[4893]: I0128 15:23:40.335312 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26q88\" (UniqueName: \"kubernetes.io/projected/e0602f55-847f-4987-ba4c-9aa5fb47ad7d-kube-api-access-26q88\") pod \"e0602f55-847f-4987-ba4c-9aa5fb47ad7d\" (UID: \"e0602f55-847f-4987-ba4c-9aa5fb47ad7d\") " Jan 28 15:23:40 crc kubenswrapper[4893]: I0128 15:23:40.335365 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e0602f55-847f-4987-ba4c-9aa5fb47ad7d-util\") pod \"e0602f55-847f-4987-ba4c-9aa5fb47ad7d\" (UID: \"e0602f55-847f-4987-ba4c-9aa5fb47ad7d\") " Jan 28 15:23:40 crc kubenswrapper[4893]: I0128 15:23:40.335500 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e0602f55-847f-4987-ba4c-9aa5fb47ad7d-bundle\") pod \"e0602f55-847f-4987-ba4c-9aa5fb47ad7d\" (UID: \"e0602f55-847f-4987-ba4c-9aa5fb47ad7d\") " Jan 28 15:23:40 crc kubenswrapper[4893]: I0128 15:23:40.337672 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0602f55-847f-4987-ba4c-9aa5fb47ad7d-bundle" (OuterVolumeSpecName: "bundle") pod "e0602f55-847f-4987-ba4c-9aa5fb47ad7d" (UID: "e0602f55-847f-4987-ba4c-9aa5fb47ad7d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:23:40 crc kubenswrapper[4893]: I0128 15:23:40.343577 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0602f55-847f-4987-ba4c-9aa5fb47ad7d-kube-api-access-26q88" (OuterVolumeSpecName: "kube-api-access-26q88") pod "e0602f55-847f-4987-ba4c-9aa5fb47ad7d" (UID: "e0602f55-847f-4987-ba4c-9aa5fb47ad7d"). InnerVolumeSpecName "kube-api-access-26q88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:23:40 crc kubenswrapper[4893]: I0128 15:23:40.349894 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0602f55-847f-4987-ba4c-9aa5fb47ad7d-util" (OuterVolumeSpecName: "util") pod "e0602f55-847f-4987-ba4c-9aa5fb47ad7d" (UID: "e0602f55-847f-4987-ba4c-9aa5fb47ad7d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:23:40 crc kubenswrapper[4893]: I0128 15:23:40.436898 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26q88\" (UniqueName: \"kubernetes.io/projected/e0602f55-847f-4987-ba4c-9aa5fb47ad7d-kube-api-access-26q88\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:40 crc kubenswrapper[4893]: I0128 15:23:40.436933 4893 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e0602f55-847f-4987-ba4c-9aa5fb47ad7d-util\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:40 crc kubenswrapper[4893]: I0128 15:23:40.436943 4893 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e0602f55-847f-4987-ba4c-9aa5fb47ad7d-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:40 crc kubenswrapper[4893]: I0128 15:23:40.927960 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l" Jan 28 15:23:40 crc kubenswrapper[4893]: I0128 15:23:40.935239 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l" event={"ID":"e0602f55-847f-4987-ba4c-9aa5fb47ad7d","Type":"ContainerDied","Data":"923a323fc4daacdc304c0a057f7b3492907485dfad95a4a71c9c747173ae9a3c"} Jan 28 15:23:40 crc kubenswrapper[4893]: I0128 15:23:40.935301 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="923a323fc4daacdc304c0a057f7b3492907485dfad95a4a71c9c747173ae9a3c" Jan 28 15:23:43 crc kubenswrapper[4893]: I0128 15:23:43.896163 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-78947fbfb8-7gj7q"] Jan 28 15:23:43 crc kubenswrapper[4893]: E0128 15:23:43.897422 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0602f55-847f-4987-ba4c-9aa5fb47ad7d" containerName="pull" Jan 28 15:23:43 crc kubenswrapper[4893]: I0128 15:23:43.897444 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0602f55-847f-4987-ba4c-9aa5fb47ad7d" containerName="pull" Jan 28 15:23:43 crc kubenswrapper[4893]: E0128 15:23:43.897493 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0602f55-847f-4987-ba4c-9aa5fb47ad7d" containerName="util" Jan 28 15:23:43 crc kubenswrapper[4893]: I0128 15:23:43.897502 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0602f55-847f-4987-ba4c-9aa5fb47ad7d" containerName="util" Jan 28 15:23:43 crc kubenswrapper[4893]: E0128 15:23:43.897524 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0602f55-847f-4987-ba4c-9aa5fb47ad7d" containerName="extract" Jan 28 15:23:43 crc kubenswrapper[4893]: I0128 15:23:43.897531 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0602f55-847f-4987-ba4c-9aa5fb47ad7d" containerName="extract" Jan 28 15:23:43 crc kubenswrapper[4893]: I0128 15:23:43.897729 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0602f55-847f-4987-ba4c-9aa5fb47ad7d" containerName="extract" Jan 28 15:23:43 crc kubenswrapper[4893]: I0128 15:23:43.898508 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-78947fbfb8-7gj7q" Jan 28 15:23:43 crc kubenswrapper[4893]: I0128 15:23:43.900741 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-service-cert" Jan 28 15:23:43 crc kubenswrapper[4893]: I0128 15:23:43.902830 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-hzg2w" Jan 28 15:23:43 crc kubenswrapper[4893]: I0128 15:23:43.917995 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-78947fbfb8-7gj7q"] Jan 28 15:23:43 crc kubenswrapper[4893]: I0128 15:23:43.994568 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6f1e8a13-7c32-4990-b658-0985329d5811-webhook-cert\") pod \"nova-operator-controller-manager-78947fbfb8-7gj7q\" (UID: \"6f1e8a13-7c32-4990-b658-0985329d5811\") " pod="openstack-operators/nova-operator-controller-manager-78947fbfb8-7gj7q" Jan 28 15:23:43 crc kubenswrapper[4893]: I0128 15:23:43.995040 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6f1e8a13-7c32-4990-b658-0985329d5811-apiservice-cert\") pod \"nova-operator-controller-manager-78947fbfb8-7gj7q\" (UID: \"6f1e8a13-7c32-4990-b658-0985329d5811\") " pod="openstack-operators/nova-operator-controller-manager-78947fbfb8-7gj7q" Jan 28 15:23:43 crc kubenswrapper[4893]: I0128 15:23:43.995239 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89rnn\" (UniqueName: \"kubernetes.io/projected/6f1e8a13-7c32-4990-b658-0985329d5811-kube-api-access-89rnn\") pod \"nova-operator-controller-manager-78947fbfb8-7gj7q\" (UID: \"6f1e8a13-7c32-4990-b658-0985329d5811\") " pod="openstack-operators/nova-operator-controller-manager-78947fbfb8-7gj7q" Jan 28 15:23:44 crc kubenswrapper[4893]: I0128 15:23:44.103573 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89rnn\" (UniqueName: \"kubernetes.io/projected/6f1e8a13-7c32-4990-b658-0985329d5811-kube-api-access-89rnn\") pod \"nova-operator-controller-manager-78947fbfb8-7gj7q\" (UID: \"6f1e8a13-7c32-4990-b658-0985329d5811\") " pod="openstack-operators/nova-operator-controller-manager-78947fbfb8-7gj7q" Jan 28 15:23:44 crc kubenswrapper[4893]: I0128 15:23:44.103671 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6f1e8a13-7c32-4990-b658-0985329d5811-webhook-cert\") pod \"nova-operator-controller-manager-78947fbfb8-7gj7q\" (UID: \"6f1e8a13-7c32-4990-b658-0985329d5811\") " pod="openstack-operators/nova-operator-controller-manager-78947fbfb8-7gj7q" Jan 28 15:23:44 crc kubenswrapper[4893]: I0128 15:23:44.103733 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6f1e8a13-7c32-4990-b658-0985329d5811-apiservice-cert\") pod \"nova-operator-controller-manager-78947fbfb8-7gj7q\" (UID: \"6f1e8a13-7c32-4990-b658-0985329d5811\") " pod="openstack-operators/nova-operator-controller-manager-78947fbfb8-7gj7q" Jan 28 15:23:44 crc kubenswrapper[4893]: I0128 15:23:44.121856 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6f1e8a13-7c32-4990-b658-0985329d5811-webhook-cert\") pod \"nova-operator-controller-manager-78947fbfb8-7gj7q\" (UID: \"6f1e8a13-7c32-4990-b658-0985329d5811\") " pod="openstack-operators/nova-operator-controller-manager-78947fbfb8-7gj7q" Jan 28 15:23:44 crc kubenswrapper[4893]: I0128 15:23:44.134508 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6f1e8a13-7c32-4990-b658-0985329d5811-apiservice-cert\") pod \"nova-operator-controller-manager-78947fbfb8-7gj7q\" (UID: \"6f1e8a13-7c32-4990-b658-0985329d5811\") " pod="openstack-operators/nova-operator-controller-manager-78947fbfb8-7gj7q" Jan 28 15:23:44 crc kubenswrapper[4893]: I0128 15:23:44.156409 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89rnn\" (UniqueName: \"kubernetes.io/projected/6f1e8a13-7c32-4990-b658-0985329d5811-kube-api-access-89rnn\") pod \"nova-operator-controller-manager-78947fbfb8-7gj7q\" (UID: \"6f1e8a13-7c32-4990-b658-0985329d5811\") " pod="openstack-operators/nova-operator-controller-manager-78947fbfb8-7gj7q" Jan 28 15:23:44 crc kubenswrapper[4893]: I0128 15:23:44.218382 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-78947fbfb8-7gj7q" Jan 28 15:23:44 crc kubenswrapper[4893]: I0128 15:23:44.743171 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-78947fbfb8-7gj7q"] Jan 28 15:23:44 crc kubenswrapper[4893]: I0128 15:23:44.989705 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-78947fbfb8-7gj7q" event={"ID":"6f1e8a13-7c32-4990-b658-0985329d5811","Type":"ContainerStarted","Data":"8b3dcc95501ad879b8a8a0fc1f1729bb523fcb557588b0574416ba7cfa32edff"} Jan 28 15:23:44 crc kubenswrapper[4893]: I0128 15:23:44.990064 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-78947fbfb8-7gj7q" event={"ID":"6f1e8a13-7c32-4990-b658-0985329d5811","Type":"ContainerStarted","Data":"d4187cc037de55036874b42809a3894d406ea66954d2cd9e393f90d98cbb7cd3"} Jan 28 15:23:44 crc kubenswrapper[4893]: I0128 15:23:44.990208 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-78947fbfb8-7gj7q" Jan 28 15:23:45 crc kubenswrapper[4893]: I0128 15:23:45.007697 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-78947fbfb8-7gj7q" podStartSLOduration=2.00767615 podStartE2EDuration="2.00767615s" podCreationTimestamp="2026-01-28 15:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:23:45.005197475 +0000 UTC m=+1342.778812503" watchObservedRunningTime="2026-01-28 15:23:45.00767615 +0000 UTC m=+1342.781291178" Jan 28 15:23:54 crc kubenswrapper[4893]: I0128 15:23:54.228408 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-78947fbfb8-7gj7q" Jan 28 15:24:20 crc kubenswrapper[4893]: I0128 15:24:20.659276 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-db-create-lkskx"] Jan 28 15:24:20 crc kubenswrapper[4893]: I0128 15:24:20.660895 4893 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-lkskx" Jan 28 15:24:20 crc kubenswrapper[4893]: I0128 15:24:20.681519 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-lkskx"] Jan 28 15:24:20 crc kubenswrapper[4893]: I0128 15:24:20.716202 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/439767c7-ee3b-4574-979b-9d59e1018a5e-operator-scripts\") pod \"nova-api-db-create-lkskx\" (UID: \"439767c7-ee3b-4574-979b-9d59e1018a5e\") " pod="nova-kuttl-default/nova-api-db-create-lkskx" Jan 28 15:24:20 crc kubenswrapper[4893]: I0128 15:24:20.716330 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnrqr\" (UniqueName: \"kubernetes.io/projected/439767c7-ee3b-4574-979b-9d59e1018a5e-kube-api-access-fnrqr\") pod \"nova-api-db-create-lkskx\" (UID: \"439767c7-ee3b-4574-979b-9d59e1018a5e\") " pod="nova-kuttl-default/nova-api-db-create-lkskx" Jan 28 15:24:20 crc kubenswrapper[4893]: I0128 15:24:20.751917 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-jwsh9"] Jan 28 15:24:20 crc kubenswrapper[4893]: I0128 15:24:20.753152 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-jwsh9" Jan 28 15:24:20 crc kubenswrapper[4893]: I0128 15:24:20.764280 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-jwsh9"] Jan 28 15:24:20 crc kubenswrapper[4893]: I0128 15:24:20.818754 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrr9m\" (UniqueName: \"kubernetes.io/projected/d4070239-a360-41b6-b1c1-27ca8e2c901d-kube-api-access-hrr9m\") pod \"nova-cell0-db-create-jwsh9\" (UID: \"d4070239-a360-41b6-b1c1-27ca8e2c901d\") " pod="nova-kuttl-default/nova-cell0-db-create-jwsh9" Jan 28 15:24:20 crc kubenswrapper[4893]: I0128 15:24:20.818826 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4070239-a360-41b6-b1c1-27ca8e2c901d-operator-scripts\") pod \"nova-cell0-db-create-jwsh9\" (UID: \"d4070239-a360-41b6-b1c1-27ca8e2c901d\") " pod="nova-kuttl-default/nova-cell0-db-create-jwsh9" Jan 28 15:24:20 crc kubenswrapper[4893]: I0128 15:24:20.819295 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/439767c7-ee3b-4574-979b-9d59e1018a5e-operator-scripts\") pod \"nova-api-db-create-lkskx\" (UID: \"439767c7-ee3b-4574-979b-9d59e1018a5e\") " pod="nova-kuttl-default/nova-api-db-create-lkskx" Jan 28 15:24:20 crc kubenswrapper[4893]: I0128 15:24:20.819493 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnrqr\" (UniqueName: \"kubernetes.io/projected/439767c7-ee3b-4574-979b-9d59e1018a5e-kube-api-access-fnrqr\") pod \"nova-api-db-create-lkskx\" (UID: \"439767c7-ee3b-4574-979b-9d59e1018a5e\") " pod="nova-kuttl-default/nova-api-db-create-lkskx" Jan 28 15:24:20 crc kubenswrapper[4893]: I0128 15:24:20.820314 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/439767c7-ee3b-4574-979b-9d59e1018a5e-operator-scripts\") pod 
\"nova-api-db-create-lkskx\" (UID: \"439767c7-ee3b-4574-979b-9d59e1018a5e\") " pod="nova-kuttl-default/nova-api-db-create-lkskx" Jan 28 15:24:20 crc kubenswrapper[4893]: I0128 15:24:20.858182 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnrqr\" (UniqueName: \"kubernetes.io/projected/439767c7-ee3b-4574-979b-9d59e1018a5e-kube-api-access-fnrqr\") pod \"nova-api-db-create-lkskx\" (UID: \"439767c7-ee3b-4574-979b-9d59e1018a5e\") " pod="nova-kuttl-default/nova-api-db-create-lkskx" Jan 28 15:24:20 crc kubenswrapper[4893]: I0128 15:24:20.871530 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-8sm77"] Jan 28 15:24:20 crc kubenswrapper[4893]: I0128 15:24:20.873283 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-8sm77" Jan 28 15:24:20 crc kubenswrapper[4893]: I0128 15:24:20.879342 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-faf0-account-create-update-mnxdx"] Jan 28 15:24:20 crc kubenswrapper[4893]: I0128 15:24:20.886324 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-faf0-account-create-update-mnxdx" Jan 28 15:24:20 crc kubenswrapper[4893]: I0128 15:24:20.889780 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-api-db-secret" Jan 28 15:24:20 crc kubenswrapper[4893]: I0128 15:24:20.890810 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-8sm77"] Jan 28 15:24:20 crc kubenswrapper[4893]: I0128 15:24:20.945102 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8fcf8457-d45f-44b4-9ec1-2635dfea5f76-operator-scripts\") pod \"nova-cell1-db-create-8sm77\" (UID: \"8fcf8457-d45f-44b4-9ec1-2635dfea5f76\") " pod="nova-kuttl-default/nova-cell1-db-create-8sm77" Jan 28 15:24:20 crc kubenswrapper[4893]: I0128 15:24:20.945172 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n6pt\" (UniqueName: \"kubernetes.io/projected/8fcf8457-d45f-44b4-9ec1-2635dfea5f76-kube-api-access-2n6pt\") pod \"nova-cell1-db-create-8sm77\" (UID: \"8fcf8457-d45f-44b4-9ec1-2635dfea5f76\") " pod="nova-kuttl-default/nova-cell1-db-create-8sm77" Jan 28 15:24:20 crc kubenswrapper[4893]: I0128 15:24:20.945250 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62730d4-0cfa-41ce-a3e5-7ea0f64739c0-operator-scripts\") pod \"nova-api-faf0-account-create-update-mnxdx\" (UID: \"e62730d4-0cfa-41ce-a3e5-7ea0f64739c0\") " pod="nova-kuttl-default/nova-api-faf0-account-create-update-mnxdx" Jan 28 15:24:20 crc kubenswrapper[4893]: I0128 15:24:20.945382 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrr9m\" (UniqueName: \"kubernetes.io/projected/d4070239-a360-41b6-b1c1-27ca8e2c901d-kube-api-access-hrr9m\") pod \"nova-cell0-db-create-jwsh9\" (UID: \"d4070239-a360-41b6-b1c1-27ca8e2c901d\") " pod="nova-kuttl-default/nova-cell0-db-create-jwsh9" Jan 28 15:24:20 crc kubenswrapper[4893]: I0128 15:24:20.945444 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d4070239-a360-41b6-b1c1-27ca8e2c901d-operator-scripts\") pod \"nova-cell0-db-create-jwsh9\" (UID: \"d4070239-a360-41b6-b1c1-27ca8e2c901d\") " pod="nova-kuttl-default/nova-cell0-db-create-jwsh9" Jan 28 15:24:20 crc kubenswrapper[4893]: I0128 15:24:20.945544 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bggs\" (UniqueName: \"kubernetes.io/projected/e62730d4-0cfa-41ce-a3e5-7ea0f64739c0-kube-api-access-4bggs\") pod \"nova-api-faf0-account-create-update-mnxdx\" (UID: \"e62730d4-0cfa-41ce-a3e5-7ea0f64739c0\") " pod="nova-kuttl-default/nova-api-faf0-account-create-update-mnxdx" Jan 28 15:24:20 crc kubenswrapper[4893]: I0128 15:24:20.947098 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-faf0-account-create-update-mnxdx"] Jan 28 15:24:20 crc kubenswrapper[4893]: I0128 15:24:20.950726 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4070239-a360-41b6-b1c1-27ca8e2c901d-operator-scripts\") pod \"nova-cell0-db-create-jwsh9\" (UID: \"d4070239-a360-41b6-b1c1-27ca8e2c901d\") " pod="nova-kuttl-default/nova-cell0-db-create-jwsh9" Jan 28 15:24:20 crc kubenswrapper[4893]: I0128 15:24:20.973761 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrr9m\" (UniqueName: \"kubernetes.io/projected/d4070239-a360-41b6-b1c1-27ca8e2c901d-kube-api-access-hrr9m\") pod \"nova-cell0-db-create-jwsh9\" (UID: \"d4070239-a360-41b6-b1c1-27ca8e2c901d\") " pod="nova-kuttl-default/nova-cell0-db-create-jwsh9" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.008706 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-lkskx" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.051458 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bggs\" (UniqueName: \"kubernetes.io/projected/e62730d4-0cfa-41ce-a3e5-7ea0f64739c0-kube-api-access-4bggs\") pod \"nova-api-faf0-account-create-update-mnxdx\" (UID: \"e62730d4-0cfa-41ce-a3e5-7ea0f64739c0\") " pod="nova-kuttl-default/nova-api-faf0-account-create-update-mnxdx" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.051585 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8fcf8457-d45f-44b4-9ec1-2635dfea5f76-operator-scripts\") pod \"nova-cell1-db-create-8sm77\" (UID: \"8fcf8457-d45f-44b4-9ec1-2635dfea5f76\") " pod="nova-kuttl-default/nova-cell1-db-create-8sm77" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.051605 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n6pt\" (UniqueName: \"kubernetes.io/projected/8fcf8457-d45f-44b4-9ec1-2635dfea5f76-kube-api-access-2n6pt\") pod \"nova-cell1-db-create-8sm77\" (UID: \"8fcf8457-d45f-44b4-9ec1-2635dfea5f76\") " pod="nova-kuttl-default/nova-cell1-db-create-8sm77" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.051634 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62730d4-0cfa-41ce-a3e5-7ea0f64739c0-operator-scripts\") pod \"nova-api-faf0-account-create-update-mnxdx\" (UID: \"e62730d4-0cfa-41ce-a3e5-7ea0f64739c0\") " pod="nova-kuttl-default/nova-api-faf0-account-create-update-mnxdx" Jan 28 15:24:21 crc kubenswrapper[4893]: 
I0128 15:24:21.052707 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8fcf8457-d45f-44b4-9ec1-2635dfea5f76-operator-scripts\") pod \"nova-cell1-db-create-8sm77\" (UID: \"8fcf8457-d45f-44b4-9ec1-2635dfea5f76\") " pod="nova-kuttl-default/nova-cell1-db-create-8sm77" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.053114 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62730d4-0cfa-41ce-a3e5-7ea0f64739c0-operator-scripts\") pod \"nova-api-faf0-account-create-update-mnxdx\" (UID: \"e62730d4-0cfa-41ce-a3e5-7ea0f64739c0\") " pod="nova-kuttl-default/nova-api-faf0-account-create-update-mnxdx" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.070393 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-jwsh9" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.074270 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n6pt\" (UniqueName: \"kubernetes.io/projected/8fcf8457-d45f-44b4-9ec1-2635dfea5f76-kube-api-access-2n6pt\") pod \"nova-cell1-db-create-8sm77\" (UID: \"8fcf8457-d45f-44b4-9ec1-2635dfea5f76\") " pod="nova-kuttl-default/nova-cell1-db-create-8sm77" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.074766 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bggs\" (UniqueName: \"kubernetes.io/projected/e62730d4-0cfa-41ce-a3e5-7ea0f64739c0-kube-api-access-4bggs\") pod \"nova-api-faf0-account-create-update-mnxdx\" (UID: \"e62730d4-0cfa-41ce-a3e5-7ea0f64739c0\") " pod="nova-kuttl-default/nova-api-faf0-account-create-update-mnxdx" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.082143 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-c310-account-create-update-g6lj5"] Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.083866 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-c310-account-create-update-g6lj5" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.086878 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell0-db-secret" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.098141 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-c310-account-create-update-g6lj5"] Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.153092 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34ab2220-9c97-48c4-8d5e-f53670f6f731-operator-scripts\") pod \"nova-cell0-c310-account-create-update-g6lj5\" (UID: \"34ab2220-9c97-48c4-8d5e-f53670f6f731\") " pod="nova-kuttl-default/nova-cell0-c310-account-create-update-g6lj5" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.153145 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxh6f\" (UniqueName: \"kubernetes.io/projected/34ab2220-9c97-48c4-8d5e-f53670f6f731-kube-api-access-bxh6f\") pod \"nova-cell0-c310-account-create-update-g6lj5\" (UID: \"34ab2220-9c97-48c4-8d5e-f53670f6f731\") " pod="nova-kuttl-default/nova-cell0-c310-account-create-update-g6lj5" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.208687 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-8sm77" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.249929 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-faf0-account-create-update-mnxdx" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.258107 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34ab2220-9c97-48c4-8d5e-f53670f6f731-operator-scripts\") pod \"nova-cell0-c310-account-create-update-g6lj5\" (UID: \"34ab2220-9c97-48c4-8d5e-f53670f6f731\") " pod="nova-kuttl-default/nova-cell0-c310-account-create-update-g6lj5" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.258177 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxh6f\" (UniqueName: \"kubernetes.io/projected/34ab2220-9c97-48c4-8d5e-f53670f6f731-kube-api-access-bxh6f\") pod \"nova-cell0-c310-account-create-update-g6lj5\" (UID: \"34ab2220-9c97-48c4-8d5e-f53670f6f731\") " pod="nova-kuttl-default/nova-cell0-c310-account-create-update-g6lj5" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.259534 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34ab2220-9c97-48c4-8d5e-f53670f6f731-operator-scripts\") pod \"nova-cell0-c310-account-create-update-g6lj5\" (UID: \"34ab2220-9c97-48c4-8d5e-f53670f6f731\") " pod="nova-kuttl-default/nova-cell0-c310-account-create-update-g6lj5" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.270801 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-ce07-account-create-update-g4vck"] Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.272539 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-ce07-account-create-update-g4vck" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.281049 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell1-db-secret" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.293438 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxh6f\" (UniqueName: \"kubernetes.io/projected/34ab2220-9c97-48c4-8d5e-f53670f6f731-kube-api-access-bxh6f\") pod \"nova-cell0-c310-account-create-update-g6lj5\" (UID: \"34ab2220-9c97-48c4-8d5e-f53670f6f731\") " pod="nova-kuttl-default/nova-cell0-c310-account-create-update-g6lj5" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.295206 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-ce07-account-create-update-g4vck"] Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.360772 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/526041d2-fb34-40ee-b6d7-c45e3f38041f-operator-scripts\") pod \"nova-cell1-ce07-account-create-update-g4vck\" (UID: \"526041d2-fb34-40ee-b6d7-c45e3f38041f\") " pod="nova-kuttl-default/nova-cell1-ce07-account-create-update-g4vck" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.360858 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69kbt\" (UniqueName: \"kubernetes.io/projected/526041d2-fb34-40ee-b6d7-c45e3f38041f-kube-api-access-69kbt\") pod \"nova-cell1-ce07-account-create-update-g4vck\" (UID: \"526041d2-fb34-40ee-b6d7-c45e3f38041f\") " pod="nova-kuttl-default/nova-cell1-ce07-account-create-update-g4vck" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.440556 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-c310-account-create-update-g6lj5" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.461927 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/526041d2-fb34-40ee-b6d7-c45e3f38041f-operator-scripts\") pod \"nova-cell1-ce07-account-create-update-g4vck\" (UID: \"526041d2-fb34-40ee-b6d7-c45e3f38041f\") " pod="nova-kuttl-default/nova-cell1-ce07-account-create-update-g4vck" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.462036 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69kbt\" (UniqueName: \"kubernetes.io/projected/526041d2-fb34-40ee-b6d7-c45e3f38041f-kube-api-access-69kbt\") pod \"nova-cell1-ce07-account-create-update-g4vck\" (UID: \"526041d2-fb34-40ee-b6d7-c45e3f38041f\") " pod="nova-kuttl-default/nova-cell1-ce07-account-create-update-g4vck" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.463534 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/526041d2-fb34-40ee-b6d7-c45e3f38041f-operator-scripts\") pod \"nova-cell1-ce07-account-create-update-g4vck\" (UID: \"526041d2-fb34-40ee-b6d7-c45e3f38041f\") " pod="nova-kuttl-default/nova-cell1-ce07-account-create-update-g4vck" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.499149 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69kbt\" (UniqueName: \"kubernetes.io/projected/526041d2-fb34-40ee-b6d7-c45e3f38041f-kube-api-access-69kbt\") pod \"nova-cell1-ce07-account-create-update-g4vck\" (UID: \"526041d2-fb34-40ee-b6d7-c45e3f38041f\") " pod="nova-kuttl-default/nova-cell1-ce07-account-create-update-g4vck" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.611872 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-ce07-account-create-update-g4vck" Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.619295 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-jwsh9"] Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.692875 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-lkskx"] Jan 28 15:24:21 crc kubenswrapper[4893]: W0128 15:24:21.760235 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod439767c7_ee3b_4574_979b_9d59e1018a5e.slice/crio-f8663ac10dbcc670ffbf4176cfe4ca6116dc46beb99ee6d1e0514011cfdf349c WatchSource:0}: Error finding container f8663ac10dbcc670ffbf4176cfe4ca6116dc46beb99ee6d1e0514011cfdf349c: Status 404 returned error can't find the container with id f8663ac10dbcc670ffbf4176cfe4ca6116dc46beb99ee6d1e0514011cfdf349c Jan 28 15:24:21 crc kubenswrapper[4893]: I0128 15:24:21.950894 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-8sm77"] Jan 28 15:24:22 crc kubenswrapper[4893]: I0128 15:24:22.008468 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-faf0-account-create-update-mnxdx"] Jan 28 15:24:22 crc kubenswrapper[4893]: I0128 15:24:22.027569 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-c310-account-create-update-g6lj5"] Jan 28 15:24:22 crc kubenswrapper[4893]: I0128 15:24:22.326316 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-c310-account-create-update-g6lj5" event={"ID":"34ab2220-9c97-48c4-8d5e-f53670f6f731","Type":"ContainerStarted","Data":"fad4054371b9899cc9e184fda0505f18921191a40f793bfa8c896ee716e69f51"} Jan 28 15:24:22 crc kubenswrapper[4893]: I0128 15:24:22.329661 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-faf0-account-create-update-mnxdx" event={"ID":"e62730d4-0cfa-41ce-a3e5-7ea0f64739c0","Type":"ContainerStarted","Data":"38312a31af09c06efc042b9da84dbdb079feef4f4f4de3f770123726f6cd8944"} Jan 28 15:24:22 crc kubenswrapper[4893]: I0128 15:24:22.331587 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-jwsh9" event={"ID":"d4070239-a360-41b6-b1c1-27ca8e2c901d","Type":"ContainerStarted","Data":"e2034b4c6cd41a77010538147a26d8aa68deff1ff7c96bc23905bd2b86fd6c85"} Jan 28 15:24:22 crc kubenswrapper[4893]: I0128 15:24:22.331636 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-jwsh9" event={"ID":"d4070239-a360-41b6-b1c1-27ca8e2c901d","Type":"ContainerStarted","Data":"6ba70d567137d524f4a7f4be19c33e9ee739e2a2c3a1ebbe57a8d6dbd6dfdc37"} Jan 28 15:24:22 crc kubenswrapper[4893]: I0128 15:24:22.335044 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-lkskx" event={"ID":"439767c7-ee3b-4574-979b-9d59e1018a5e","Type":"ContainerStarted","Data":"120c9e924930b9187c740f4ecd27062cd24aeebfc246671da3d9bf133203c2b5"} Jan 28 15:24:22 crc kubenswrapper[4893]: I0128 15:24:22.335082 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-lkskx" event={"ID":"439767c7-ee3b-4574-979b-9d59e1018a5e","Type":"ContainerStarted","Data":"f8663ac10dbcc670ffbf4176cfe4ca6116dc46beb99ee6d1e0514011cfdf349c"} Jan 28 15:24:22 crc kubenswrapper[4893]: I0128 15:24:22.338663 4893 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-8sm77" event={"ID":"8fcf8457-d45f-44b4-9ec1-2635dfea5f76","Type":"ContainerStarted","Data":"74498b77bba1d94b0a19263b1a8fa9e29130ed2a540a3b9b6be1b5683d6d05d5"} Jan 28 15:24:22 crc kubenswrapper[4893]: I0128 15:24:22.338729 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-8sm77" event={"ID":"8fcf8457-d45f-44b4-9ec1-2635dfea5f76","Type":"ContainerStarted","Data":"69839589feab6d68b9c1e2bb54c6c3d87579d7d97f6065ff31872e4ce4ee9a95"} Jan 28 15:24:22 crc kubenswrapper[4893]: I0128 15:24:22.355022 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-cell0-db-create-jwsh9" podStartSLOduration=2.354998183 podStartE2EDuration="2.354998183s" podCreationTimestamp="2026-01-28 15:24:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:24:22.350910505 +0000 UTC m=+1380.124525533" watchObservedRunningTime="2026-01-28 15:24:22.354998183 +0000 UTC m=+1380.128613221" Jan 28 15:24:22 crc kubenswrapper[4893]: I0128 15:24:22.384690 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-ce07-account-create-update-g4vck"] Jan 28 15:24:22 crc kubenswrapper[4893]: I0128 15:24:22.397346 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-cell1-db-create-8sm77" podStartSLOduration=2.397316107 podStartE2EDuration="2.397316107s" podCreationTimestamp="2026-01-28 15:24:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:24:22.389255077 +0000 UTC m=+1380.162870115" watchObservedRunningTime="2026-01-28 15:24:22.397316107 +0000 UTC m=+1380.170931135" Jan 28 15:24:22 crc kubenswrapper[4893]: I0128 15:24:22.428839 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-api-db-create-lkskx" podStartSLOduration=2.428774818 podStartE2EDuration="2.428774818s" podCreationTimestamp="2026-01-28 15:24:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:24:22.42345466 +0000 UTC m=+1380.197069688" watchObservedRunningTime="2026-01-28 15:24:22.428774818 +0000 UTC m=+1380.202389846" Jan 28 15:24:23 crc kubenswrapper[4893]: I0128 15:24:23.358154 4893 generic.go:334] "Generic (PLEG): container finished" podID="e62730d4-0cfa-41ce-a3e5-7ea0f64739c0" containerID="a485ee8e12c459d8af7239e69f3391a6c16707d1bc205dff4ce305c4df08f5bc" exitCode=0 Jan 28 15:24:23 crc kubenswrapper[4893]: I0128 15:24:23.358283 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-faf0-account-create-update-mnxdx" event={"ID":"e62730d4-0cfa-41ce-a3e5-7ea0f64739c0","Type":"ContainerDied","Data":"a485ee8e12c459d8af7239e69f3391a6c16707d1bc205dff4ce305c4df08f5bc"} Jan 28 15:24:23 crc kubenswrapper[4893]: I0128 15:24:23.365817 4893 generic.go:334] "Generic (PLEG): container finished" podID="d4070239-a360-41b6-b1c1-27ca8e2c901d" containerID="e2034b4c6cd41a77010538147a26d8aa68deff1ff7c96bc23905bd2b86fd6c85" exitCode=0 Jan 28 15:24:23 crc kubenswrapper[4893]: I0128 15:24:23.365899 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-jwsh9" 
event={"ID":"d4070239-a360-41b6-b1c1-27ca8e2c901d","Type":"ContainerDied","Data":"e2034b4c6cd41a77010538147a26d8aa68deff1ff7c96bc23905bd2b86fd6c85"} Jan 28 15:24:23 crc kubenswrapper[4893]: I0128 15:24:23.367731 4893 generic.go:334] "Generic (PLEG): container finished" podID="439767c7-ee3b-4574-979b-9d59e1018a5e" containerID="120c9e924930b9187c740f4ecd27062cd24aeebfc246671da3d9bf133203c2b5" exitCode=0 Jan 28 15:24:23 crc kubenswrapper[4893]: I0128 15:24:23.367781 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-lkskx" event={"ID":"439767c7-ee3b-4574-979b-9d59e1018a5e","Type":"ContainerDied","Data":"120c9e924930b9187c740f4ecd27062cd24aeebfc246671da3d9bf133203c2b5"} Jan 28 15:24:23 crc kubenswrapper[4893]: I0128 15:24:23.369190 4893 generic.go:334] "Generic (PLEG): container finished" podID="8fcf8457-d45f-44b4-9ec1-2635dfea5f76" containerID="74498b77bba1d94b0a19263b1a8fa9e29130ed2a540a3b9b6be1b5683d6d05d5" exitCode=0 Jan 28 15:24:23 crc kubenswrapper[4893]: I0128 15:24:23.369314 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-8sm77" event={"ID":"8fcf8457-d45f-44b4-9ec1-2635dfea5f76","Type":"ContainerDied","Data":"74498b77bba1d94b0a19263b1a8fa9e29130ed2a540a3b9b6be1b5683d6d05d5"} Jan 28 15:24:23 crc kubenswrapper[4893]: I0128 15:24:23.370898 4893 generic.go:334] "Generic (PLEG): container finished" podID="526041d2-fb34-40ee-b6d7-c45e3f38041f" containerID="c5b3d8589b4a8429e006febea6cba25c1c54675226a49dc44067e444ce1b9931" exitCode=0 Jan 28 15:24:23 crc kubenswrapper[4893]: I0128 15:24:23.370940 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-ce07-account-create-update-g4vck" event={"ID":"526041d2-fb34-40ee-b6d7-c45e3f38041f","Type":"ContainerDied","Data":"c5b3d8589b4a8429e006febea6cba25c1c54675226a49dc44067e444ce1b9931"} Jan 28 15:24:23 crc kubenswrapper[4893]: I0128 15:24:23.370958 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-ce07-account-create-update-g4vck" event={"ID":"526041d2-fb34-40ee-b6d7-c45e3f38041f","Type":"ContainerStarted","Data":"1da170fd3db3fbdbf1ff15efd6c82d4263c0f08dccbd194157d7450a53818731"} Jan 28 15:24:23 crc kubenswrapper[4893]: I0128 15:24:23.372502 4893 generic.go:334] "Generic (PLEG): container finished" podID="34ab2220-9c97-48c4-8d5e-f53670f6f731" containerID="e2ebfd55fda8709ff21ec3e56771802b492867dab184dea83ac0e1b77c818d91" exitCode=0 Jan 28 15:24:23 crc kubenswrapper[4893]: I0128 15:24:23.372533 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-c310-account-create-update-g6lj5" event={"ID":"34ab2220-9c97-48c4-8d5e-f53670f6f731","Type":"ContainerDied","Data":"e2ebfd55fda8709ff21ec3e56771802b492867dab184dea83ac0e1b77c818d91"} Jan 28 15:24:24 crc kubenswrapper[4893]: I0128 15:24:24.802137 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-c310-account-create-update-g6lj5" Jan 28 15:24:24 crc kubenswrapper[4893]: I0128 15:24:24.962141 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34ab2220-9c97-48c4-8d5e-f53670f6f731-operator-scripts\") pod \"34ab2220-9c97-48c4-8d5e-f53670f6f731\" (UID: \"34ab2220-9c97-48c4-8d5e-f53670f6f731\") " Jan 28 15:24:24 crc kubenswrapper[4893]: I0128 15:24:24.962658 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxh6f\" (UniqueName: \"kubernetes.io/projected/34ab2220-9c97-48c4-8d5e-f53670f6f731-kube-api-access-bxh6f\") pod \"34ab2220-9c97-48c4-8d5e-f53670f6f731\" (UID: \"34ab2220-9c97-48c4-8d5e-f53670f6f731\") " Jan 28 15:24:24 crc kubenswrapper[4893]: I0128 15:24:24.964928 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34ab2220-9c97-48c4-8d5e-f53670f6f731-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "34ab2220-9c97-48c4-8d5e-f53670f6f731" (UID: "34ab2220-9c97-48c4-8d5e-f53670f6f731"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:24 crc kubenswrapper[4893]: I0128 15:24:24.973688 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34ab2220-9c97-48c4-8d5e-f53670f6f731-kube-api-access-bxh6f" (OuterVolumeSpecName: "kube-api-access-bxh6f") pod "34ab2220-9c97-48c4-8d5e-f53670f6f731" (UID: "34ab2220-9c97-48c4-8d5e-f53670f6f731"). InnerVolumeSpecName "kube-api-access-bxh6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.056604 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-8sm77" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.067653 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-jwsh9" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.073805 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34ab2220-9c97-48c4-8d5e-f53670f6f731-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.073851 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxh6f\" (UniqueName: \"kubernetes.io/projected/34ab2220-9c97-48c4-8d5e-f53670f6f731-kube-api-access-bxh6f\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.098947 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-ce07-account-create-update-g4vck" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.119408 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-faf0-account-create-update-mnxdx" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.125571 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-lkskx" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.175466 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8fcf8457-d45f-44b4-9ec1-2635dfea5f76-operator-scripts\") pod \"8fcf8457-d45f-44b4-9ec1-2635dfea5f76\" (UID: \"8fcf8457-d45f-44b4-9ec1-2635dfea5f76\") " Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.175550 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2n6pt\" (UniqueName: \"kubernetes.io/projected/8fcf8457-d45f-44b4-9ec1-2635dfea5f76-kube-api-access-2n6pt\") pod \"8fcf8457-d45f-44b4-9ec1-2635dfea5f76\" (UID: \"8fcf8457-d45f-44b4-9ec1-2635dfea5f76\") " Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.175574 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4070239-a360-41b6-b1c1-27ca8e2c901d-operator-scripts\") pod \"d4070239-a360-41b6-b1c1-27ca8e2c901d\" (UID: \"d4070239-a360-41b6-b1c1-27ca8e2c901d\") " Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.175592 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrr9m\" (UniqueName: \"kubernetes.io/projected/d4070239-a360-41b6-b1c1-27ca8e2c901d-kube-api-access-hrr9m\") pod \"d4070239-a360-41b6-b1c1-27ca8e2c901d\" (UID: \"d4070239-a360-41b6-b1c1-27ca8e2c901d\") " Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.176657 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fcf8457-d45f-44b4-9ec1-2635dfea5f76-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8fcf8457-d45f-44b4-9ec1-2635dfea5f76" (UID: "8fcf8457-d45f-44b4-9ec1-2635dfea5f76"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.176684 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4070239-a360-41b6-b1c1-27ca8e2c901d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d4070239-a360-41b6-b1c1-27ca8e2c901d" (UID: "d4070239-a360-41b6-b1c1-27ca8e2c901d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.178746 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4070239-a360-41b6-b1c1-27ca8e2c901d-kube-api-access-hrr9m" (OuterVolumeSpecName: "kube-api-access-hrr9m") pod "d4070239-a360-41b6-b1c1-27ca8e2c901d" (UID: "d4070239-a360-41b6-b1c1-27ca8e2c901d"). InnerVolumeSpecName "kube-api-access-hrr9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.179634 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fcf8457-d45f-44b4-9ec1-2635dfea5f76-kube-api-access-2n6pt" (OuterVolumeSpecName: "kube-api-access-2n6pt") pod "8fcf8457-d45f-44b4-9ec1-2635dfea5f76" (UID: "8fcf8457-d45f-44b4-9ec1-2635dfea5f76"). InnerVolumeSpecName "kube-api-access-2n6pt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.277298 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnrqr\" (UniqueName: \"kubernetes.io/projected/439767c7-ee3b-4574-979b-9d59e1018a5e-kube-api-access-fnrqr\") pod \"439767c7-ee3b-4574-979b-9d59e1018a5e\" (UID: \"439767c7-ee3b-4574-979b-9d59e1018a5e\") " Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.277380 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/439767c7-ee3b-4574-979b-9d59e1018a5e-operator-scripts\") pod \"439767c7-ee3b-4574-979b-9d59e1018a5e\" (UID: \"439767c7-ee3b-4574-979b-9d59e1018a5e\") " Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.277429 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/526041d2-fb34-40ee-b6d7-c45e3f38041f-operator-scripts\") pod \"526041d2-fb34-40ee-b6d7-c45e3f38041f\" (UID: \"526041d2-fb34-40ee-b6d7-c45e3f38041f\") " Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.277456 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62730d4-0cfa-41ce-a3e5-7ea0f64739c0-operator-scripts\") pod \"e62730d4-0cfa-41ce-a3e5-7ea0f64739c0\" (UID: \"e62730d4-0cfa-41ce-a3e5-7ea0f64739c0\") " Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.277516 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bggs\" (UniqueName: \"kubernetes.io/projected/e62730d4-0cfa-41ce-a3e5-7ea0f64739c0-kube-api-access-4bggs\") pod \"e62730d4-0cfa-41ce-a3e5-7ea0f64739c0\" (UID: \"e62730d4-0cfa-41ce-a3e5-7ea0f64739c0\") " Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.277612 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69kbt\" (UniqueName: \"kubernetes.io/projected/526041d2-fb34-40ee-b6d7-c45e3f38041f-kube-api-access-69kbt\") pod \"526041d2-fb34-40ee-b6d7-c45e3f38041f\" (UID: \"526041d2-fb34-40ee-b6d7-c45e3f38041f\") " Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.277880 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8fcf8457-d45f-44b4-9ec1-2635dfea5f76-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.277899 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2n6pt\" (UniqueName: \"kubernetes.io/projected/8fcf8457-d45f-44b4-9ec1-2635dfea5f76-kube-api-access-2n6pt\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.277910 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4070239-a360-41b6-b1c1-27ca8e2c901d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.277920 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrr9m\" (UniqueName: \"kubernetes.io/projected/d4070239-a360-41b6-b1c1-27ca8e2c901d-kube-api-access-hrr9m\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.277798 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/439767c7-ee3b-4574-979b-9d59e1018a5e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "439767c7-ee3b-4574-979b-9d59e1018a5e" (UID: "439767c7-ee3b-4574-979b-9d59e1018a5e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.277996 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/526041d2-fb34-40ee-b6d7-c45e3f38041f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "526041d2-fb34-40ee-b6d7-c45e3f38041f" (UID: "526041d2-fb34-40ee-b6d7-c45e3f38041f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.278419 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e62730d4-0cfa-41ce-a3e5-7ea0f64739c0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e62730d4-0cfa-41ce-a3e5-7ea0f64739c0" (UID: "e62730d4-0cfa-41ce-a3e5-7ea0f64739c0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.280364 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e62730d4-0cfa-41ce-a3e5-7ea0f64739c0-kube-api-access-4bggs" (OuterVolumeSpecName: "kube-api-access-4bggs") pod "e62730d4-0cfa-41ce-a3e5-7ea0f64739c0" (UID: "e62730d4-0cfa-41ce-a3e5-7ea0f64739c0"). InnerVolumeSpecName "kube-api-access-4bggs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.280802 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/439767c7-ee3b-4574-979b-9d59e1018a5e-kube-api-access-fnrqr" (OuterVolumeSpecName: "kube-api-access-fnrqr") pod "439767c7-ee3b-4574-979b-9d59e1018a5e" (UID: "439767c7-ee3b-4574-979b-9d59e1018a5e"). InnerVolumeSpecName "kube-api-access-fnrqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.281052 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/526041d2-fb34-40ee-b6d7-c45e3f38041f-kube-api-access-69kbt" (OuterVolumeSpecName: "kube-api-access-69kbt") pod "526041d2-fb34-40ee-b6d7-c45e3f38041f" (UID: "526041d2-fb34-40ee-b6d7-c45e3f38041f"). InnerVolumeSpecName "kube-api-access-69kbt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.379446 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69kbt\" (UniqueName: \"kubernetes.io/projected/526041d2-fb34-40ee-b6d7-c45e3f38041f-kube-api-access-69kbt\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.379515 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnrqr\" (UniqueName: \"kubernetes.io/projected/439767c7-ee3b-4574-979b-9d59e1018a5e-kube-api-access-fnrqr\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.379538 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/439767c7-ee3b-4574-979b-9d59e1018a5e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.379551 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/526041d2-fb34-40ee-b6d7-c45e3f38041f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.379571 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62730d4-0cfa-41ce-a3e5-7ea0f64739c0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.379587 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bggs\" (UniqueName: \"kubernetes.io/projected/e62730d4-0cfa-41ce-a3e5-7ea0f64739c0-kube-api-access-4bggs\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.391569 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-ce07-account-create-update-g4vck" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.391552 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-ce07-account-create-update-g4vck" event={"ID":"526041d2-fb34-40ee-b6d7-c45e3f38041f","Type":"ContainerDied","Data":"1da170fd3db3fbdbf1ff15efd6c82d4263c0f08dccbd194157d7450a53818731"} Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.391724 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1da170fd3db3fbdbf1ff15efd6c82d4263c0f08dccbd194157d7450a53818731" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.393371 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-c310-account-create-update-g6lj5" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.393395 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-c310-account-create-update-g6lj5" event={"ID":"34ab2220-9c97-48c4-8d5e-f53670f6f731","Type":"ContainerDied","Data":"fad4054371b9899cc9e184fda0505f18921191a40f793bfa8c896ee716e69f51"} Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.393439 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fad4054371b9899cc9e184fda0505f18921191a40f793bfa8c896ee716e69f51" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.394967 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-faf0-account-create-update-mnxdx" event={"ID":"e62730d4-0cfa-41ce-a3e5-7ea0f64739c0","Type":"ContainerDied","Data":"38312a31af09c06efc042b9da84dbdb079feef4f4f4de3f770123726f6cd8944"} Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.395024 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38312a31af09c06efc042b9da84dbdb079feef4f4f4de3f770123726f6cd8944" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.395115 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-faf0-account-create-update-mnxdx" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.397086 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-jwsh9" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.397096 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-jwsh9" event={"ID":"d4070239-a360-41b6-b1c1-27ca8e2c901d","Type":"ContainerDied","Data":"6ba70d567137d524f4a7f4be19c33e9ee739e2a2c3a1ebbe57a8d6dbd6dfdc37"} Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.397125 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ba70d567137d524f4a7f4be19c33e9ee739e2a2c3a1ebbe57a8d6dbd6dfdc37" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.398953 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-lkskx" event={"ID":"439767c7-ee3b-4574-979b-9d59e1018a5e","Type":"ContainerDied","Data":"f8663ac10dbcc670ffbf4176cfe4ca6116dc46beb99ee6d1e0514011cfdf349c"} Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.399006 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8663ac10dbcc670ffbf4176cfe4ca6116dc46beb99ee6d1e0514011cfdf349c" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.399093 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-lkskx" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.401374 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-8sm77" event={"ID":"8fcf8457-d45f-44b4-9ec1-2635dfea5f76","Type":"ContainerDied","Data":"69839589feab6d68b9c1e2bb54c6c3d87579d7d97f6065ff31872e4ce4ee9a95"} Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.401428 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69839589feab6d68b9c1e2bb54c6c3d87579d7d97f6065ff31872e4ce4ee9a95" Jan 28 15:24:25 crc kubenswrapper[4893]: I0128 15:24:25.401526 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-8sm77" Jan 28 15:24:26 crc kubenswrapper[4893]: I0128 15:24:26.424865 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-82rfx"] Jan 28 15:24:26 crc kubenswrapper[4893]: E0128 15:24:26.425224 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ab2220-9c97-48c4-8d5e-f53670f6f731" containerName="mariadb-account-create-update" Jan 28 15:24:26 crc kubenswrapper[4893]: I0128 15:24:26.425319 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ab2220-9c97-48c4-8d5e-f53670f6f731" containerName="mariadb-account-create-update" Jan 28 15:24:26 crc kubenswrapper[4893]: E0128 15:24:26.425358 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4070239-a360-41b6-b1c1-27ca8e2c901d" containerName="mariadb-database-create" Jan 28 15:24:26 crc kubenswrapper[4893]: I0128 15:24:26.425365 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4070239-a360-41b6-b1c1-27ca8e2c901d" containerName="mariadb-database-create" Jan 28 15:24:26 crc kubenswrapper[4893]: E0128 15:24:26.425383 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e62730d4-0cfa-41ce-a3e5-7ea0f64739c0" containerName="mariadb-account-create-update" Jan 28 15:24:26 crc kubenswrapper[4893]: I0128 15:24:26.425389 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="e62730d4-0cfa-41ce-a3e5-7ea0f64739c0" containerName="mariadb-account-create-update" Jan 28 15:24:26 crc kubenswrapper[4893]: E0128 15:24:26.425401 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="439767c7-ee3b-4574-979b-9d59e1018a5e" containerName="mariadb-database-create" Jan 28 15:24:26 crc kubenswrapper[4893]: I0128 15:24:26.425407 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="439767c7-ee3b-4574-979b-9d59e1018a5e" containerName="mariadb-database-create" Jan 28 15:24:26 crc kubenswrapper[4893]: E0128 15:24:26.425418 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fcf8457-d45f-44b4-9ec1-2635dfea5f76" containerName="mariadb-database-create" Jan 28 15:24:26 crc kubenswrapper[4893]: I0128 15:24:26.425424 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fcf8457-d45f-44b4-9ec1-2635dfea5f76" containerName="mariadb-database-create" Jan 28 15:24:26 crc kubenswrapper[4893]: E0128 15:24:26.425436 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="526041d2-fb34-40ee-b6d7-c45e3f38041f" containerName="mariadb-account-create-update" Jan 28 15:24:26 crc kubenswrapper[4893]: I0128 15:24:26.425442 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="526041d2-fb34-40ee-b6d7-c45e3f38041f" containerName="mariadb-account-create-update" Jan 28 15:24:26 crc kubenswrapper[4893]: I0128 15:24:26.425663 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4070239-a360-41b6-b1c1-27ca8e2c901d" containerName="mariadb-database-create" Jan 28 15:24:26 crc kubenswrapper[4893]: I0128 15:24:26.425675 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="526041d2-fb34-40ee-b6d7-c45e3f38041f" containerName="mariadb-account-create-update" Jan 28 15:24:26 crc kubenswrapper[4893]: I0128 15:24:26.425686 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ab2220-9c97-48c4-8d5e-f53670f6f731" containerName="mariadb-account-create-update" Jan 28 15:24:26 crc kubenswrapper[4893]: I0128 15:24:26.425695 4893 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e62730d4-0cfa-41ce-a3e5-7ea0f64739c0" containerName="mariadb-account-create-update" Jan 28 15:24:26 crc kubenswrapper[4893]: I0128 15:24:26.425704 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fcf8457-d45f-44b4-9ec1-2635dfea5f76" containerName="mariadb-database-create" Jan 28 15:24:26 crc kubenswrapper[4893]: I0128 15:24:26.425722 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="439767c7-ee3b-4574-979b-9d59e1018a5e" containerName="mariadb-database-create" Jan 28 15:24:26 crc kubenswrapper[4893]: I0128 15:24:26.426395 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-82rfx" Jan 28 15:24:26 crc kubenswrapper[4893]: I0128 15:24:26.429718 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 28 15:24:26 crc kubenswrapper[4893]: I0128 15:24:26.429940 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-scripts" Jan 28 15:24:26 crc kubenswrapper[4893]: I0128 15:24:26.430116 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-drscq" Jan 28 15:24:26 crc kubenswrapper[4893]: I0128 15:24:26.433403 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-82rfx"] Jan 28 15:24:26 crc kubenswrapper[4893]: I0128 15:24:26.598815 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwsc2\" (UniqueName: \"kubernetes.io/projected/f671187e-a6f6-47bc-8627-f324e5e1ff10-kube-api-access-qwsc2\") pod \"nova-kuttl-cell0-conductor-db-sync-82rfx\" (UID: \"f671187e-a6f6-47bc-8627-f324e5e1ff10\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-82rfx" Jan 28 15:24:26 crc kubenswrapper[4893]: I0128 15:24:26.598905 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f671187e-a6f6-47bc-8627-f324e5e1ff10-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-82rfx\" (UID: \"f671187e-a6f6-47bc-8627-f324e5e1ff10\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-82rfx" Jan 28 15:24:26 crc kubenswrapper[4893]: I0128 15:24:26.598964 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f671187e-a6f6-47bc-8627-f324e5e1ff10-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-82rfx\" (UID: \"f671187e-a6f6-47bc-8627-f324e5e1ff10\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-82rfx" Jan 28 15:24:26 crc kubenswrapper[4893]: I0128 15:24:26.700715 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwsc2\" (UniqueName: \"kubernetes.io/projected/f671187e-a6f6-47bc-8627-f324e5e1ff10-kube-api-access-qwsc2\") pod \"nova-kuttl-cell0-conductor-db-sync-82rfx\" (UID: \"f671187e-a6f6-47bc-8627-f324e5e1ff10\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-82rfx" Jan 28 15:24:26 crc kubenswrapper[4893]: I0128 15:24:26.701117 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f671187e-a6f6-47bc-8627-f324e5e1ff10-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-82rfx\" (UID: 
\"f671187e-a6f6-47bc-8627-f324e5e1ff10\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-82rfx" Jan 28 15:24:26 crc kubenswrapper[4893]: I0128 15:24:26.701993 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f671187e-a6f6-47bc-8627-f324e5e1ff10-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-82rfx\" (UID: \"f671187e-a6f6-47bc-8627-f324e5e1ff10\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-82rfx" Jan 28 15:24:26 crc kubenswrapper[4893]: I0128 15:24:26.704832 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f671187e-a6f6-47bc-8627-f324e5e1ff10-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-82rfx\" (UID: \"f671187e-a6f6-47bc-8627-f324e5e1ff10\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-82rfx" Jan 28 15:24:26 crc kubenswrapper[4893]: I0128 15:24:26.705713 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f671187e-a6f6-47bc-8627-f324e5e1ff10-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-82rfx\" (UID: \"f671187e-a6f6-47bc-8627-f324e5e1ff10\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-82rfx" Jan 28 15:24:26 crc kubenswrapper[4893]: I0128 15:24:26.718554 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwsc2\" (UniqueName: \"kubernetes.io/projected/f671187e-a6f6-47bc-8627-f324e5e1ff10-kube-api-access-qwsc2\") pod \"nova-kuttl-cell0-conductor-db-sync-82rfx\" (UID: \"f671187e-a6f6-47bc-8627-f324e5e1ff10\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-82rfx" Jan 28 15:24:26 crc kubenswrapper[4893]: I0128 15:24:26.743504 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-82rfx" Jan 28 15:24:27 crc kubenswrapper[4893]: I0128 15:24:27.189350 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-82rfx"] Jan 28 15:24:27 crc kubenswrapper[4893]: W0128 15:24:27.193527 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf671187e_a6f6_47bc_8627_f324e5e1ff10.slice/crio-5534642a520e0791d28f09b42ca8e8ddc92dcd7a8ca1ffc1809a791aa2528ec8 WatchSource:0}: Error finding container 5534642a520e0791d28f09b42ca8e8ddc92dcd7a8ca1ffc1809a791aa2528ec8: Status 404 returned error can't find the container with id 5534642a520e0791d28f09b42ca8e8ddc92dcd7a8ca1ffc1809a791aa2528ec8 Jan 28 15:24:27 crc kubenswrapper[4893]: I0128 15:24:27.417647 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-82rfx" event={"ID":"f671187e-a6f6-47bc-8627-f324e5e1ff10","Type":"ContainerStarted","Data":"5534642a520e0791d28f09b42ca8e8ddc92dcd7a8ca1ffc1809a791aa2528ec8"} Jan 28 15:24:34 crc kubenswrapper[4893]: I0128 15:24:34.468374 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-82rfx" event={"ID":"f671187e-a6f6-47bc-8627-f324e5e1ff10","Type":"ContainerStarted","Data":"0a7c41e2443e1f8abad80e7dea8f4b931d01e7d7b695254ae6c1ff943aa2f6a9"} Jan 28 15:24:35 crc kubenswrapper[4893]: I0128 15:24:35.722831 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:24:35 crc kubenswrapper[4893]: I0128 15:24:35.722897 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:24:58 crc kubenswrapper[4893]: I0128 15:24:58.744774 4893 generic.go:334] "Generic (PLEG): container finished" podID="f671187e-a6f6-47bc-8627-f324e5e1ff10" containerID="0a7c41e2443e1f8abad80e7dea8f4b931d01e7d7b695254ae6c1ff943aa2f6a9" exitCode=0 Jan 28 15:24:58 crc kubenswrapper[4893]: I0128 15:24:58.744862 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-82rfx" event={"ID":"f671187e-a6f6-47bc-8627-f324e5e1ff10","Type":"ContainerDied","Data":"0a7c41e2443e1f8abad80e7dea8f4b931d01e7d7b695254ae6c1ff943aa2f6a9"} Jan 28 15:25:00 crc kubenswrapper[4893]: I0128 15:25:00.216585 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-82rfx" Jan 28 15:25:00 crc kubenswrapper[4893]: I0128 15:25:00.395366 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwsc2\" (UniqueName: \"kubernetes.io/projected/f671187e-a6f6-47bc-8627-f324e5e1ff10-kube-api-access-qwsc2\") pod \"f671187e-a6f6-47bc-8627-f324e5e1ff10\" (UID: \"f671187e-a6f6-47bc-8627-f324e5e1ff10\") " Jan 28 15:25:00 crc kubenswrapper[4893]: I0128 15:25:00.395840 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f671187e-a6f6-47bc-8627-f324e5e1ff10-scripts\") pod \"f671187e-a6f6-47bc-8627-f324e5e1ff10\" (UID: \"f671187e-a6f6-47bc-8627-f324e5e1ff10\") " Jan 28 15:25:00 crc kubenswrapper[4893]: I0128 15:25:00.396069 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f671187e-a6f6-47bc-8627-f324e5e1ff10-config-data\") pod \"f671187e-a6f6-47bc-8627-f324e5e1ff10\" (UID: \"f671187e-a6f6-47bc-8627-f324e5e1ff10\") " Jan 28 15:25:00 crc kubenswrapper[4893]: I0128 15:25:00.417991 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f671187e-a6f6-47bc-8627-f324e5e1ff10-kube-api-access-qwsc2" (OuterVolumeSpecName: "kube-api-access-qwsc2") pod "f671187e-a6f6-47bc-8627-f324e5e1ff10" (UID: "f671187e-a6f6-47bc-8627-f324e5e1ff10"). InnerVolumeSpecName "kube-api-access-qwsc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:25:00 crc kubenswrapper[4893]: I0128 15:25:00.417993 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f671187e-a6f6-47bc-8627-f324e5e1ff10-scripts" (OuterVolumeSpecName: "scripts") pod "f671187e-a6f6-47bc-8627-f324e5e1ff10" (UID: "f671187e-a6f6-47bc-8627-f324e5e1ff10"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:00 crc kubenswrapper[4893]: I0128 15:25:00.425938 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f671187e-a6f6-47bc-8627-f324e5e1ff10-config-data" (OuterVolumeSpecName: "config-data") pod "f671187e-a6f6-47bc-8627-f324e5e1ff10" (UID: "f671187e-a6f6-47bc-8627-f324e5e1ff10"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:00 crc kubenswrapper[4893]: I0128 15:25:00.503994 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f671187e-a6f6-47bc-8627-f324e5e1ff10-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:00 crc kubenswrapper[4893]: I0128 15:25:00.504153 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwsc2\" (UniqueName: \"kubernetes.io/projected/f671187e-a6f6-47bc-8627-f324e5e1ff10-kube-api-access-qwsc2\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:00 crc kubenswrapper[4893]: I0128 15:25:00.504187 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f671187e-a6f6-47bc-8627-f324e5e1ff10-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:00 crc kubenswrapper[4893]: I0128 15:25:00.765097 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-82rfx" event={"ID":"f671187e-a6f6-47bc-8627-f324e5e1ff10","Type":"ContainerDied","Data":"5534642a520e0791d28f09b42ca8e8ddc92dcd7a8ca1ffc1809a791aa2528ec8"} Jan 28 15:25:00 crc kubenswrapper[4893]: I0128 15:25:00.765165 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5534642a520e0791d28f09b42ca8e8ddc92dcd7a8ca1ffc1809a791aa2528ec8" Jan 28 15:25:00 crc kubenswrapper[4893]: I0128 15:25:00.765240 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-82rfx" Jan 28 15:25:00 crc kubenswrapper[4893]: I0128 15:25:00.878448 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 15:25:00 crc kubenswrapper[4893]: E0128 15:25:00.878924 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f671187e-a6f6-47bc-8627-f324e5e1ff10" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 28 15:25:00 crc kubenswrapper[4893]: I0128 15:25:00.878947 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f671187e-a6f6-47bc-8627-f324e5e1ff10" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 28 15:25:00 crc kubenswrapper[4893]: I0128 15:25:00.879144 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f671187e-a6f6-47bc-8627-f324e5e1ff10" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 28 15:25:00 crc kubenswrapper[4893]: I0128 15:25:00.879849 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:25:00 crc kubenswrapper[4893]: I0128 15:25:00.882723 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-drscq" Jan 28 15:25:00 crc kubenswrapper[4893]: I0128 15:25:00.884397 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 28 15:25:00 crc kubenswrapper[4893]: I0128 15:25:00.902909 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 15:25:00 crc kubenswrapper[4893]: I0128 15:25:00.912370 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/671633bc-0311-475f-9e70-b101fa5257ad-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"671633bc-0311-475f-9e70-b101fa5257ad\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:25:00 crc kubenswrapper[4893]: I0128 15:25:00.912708 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bvq4\" (UniqueName: \"kubernetes.io/projected/671633bc-0311-475f-9e70-b101fa5257ad-kube-api-access-8bvq4\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"671633bc-0311-475f-9e70-b101fa5257ad\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:25:01 crc kubenswrapper[4893]: I0128 15:25:01.013850 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bvq4\" (UniqueName: \"kubernetes.io/projected/671633bc-0311-475f-9e70-b101fa5257ad-kube-api-access-8bvq4\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"671633bc-0311-475f-9e70-b101fa5257ad\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:25:01 crc kubenswrapper[4893]: I0128 15:25:01.014332 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/671633bc-0311-475f-9e70-b101fa5257ad-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"671633bc-0311-475f-9e70-b101fa5257ad\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:25:01 crc kubenswrapper[4893]: I0128 15:25:01.018865 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/671633bc-0311-475f-9e70-b101fa5257ad-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"671633bc-0311-475f-9e70-b101fa5257ad\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:25:01 crc kubenswrapper[4893]: I0128 15:25:01.036130 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bvq4\" (UniqueName: \"kubernetes.io/projected/671633bc-0311-475f-9e70-b101fa5257ad-kube-api-access-8bvq4\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"671633bc-0311-475f-9e70-b101fa5257ad\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:25:01 crc kubenswrapper[4893]: I0128 15:25:01.198714 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:25:01 crc kubenswrapper[4893]: I0128 15:25:01.641535 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 15:25:01 crc kubenswrapper[4893]: I0128 15:25:01.775289 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"671633bc-0311-475f-9e70-b101fa5257ad","Type":"ContainerStarted","Data":"c7a3de3beb0ae3845a50972296912a22ab94b9d7baa0f84336a4a659f24cb547"} Jan 28 15:25:02 crc kubenswrapper[4893]: I0128 15:25:02.786706 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"671633bc-0311-475f-9e70-b101fa5257ad","Type":"ContainerStarted","Data":"209d36d98c0be75b9759ce5a3e10e5042786d1fa0dfebc8671e432a3f65f3890"} Jan 28 15:25:02 crc kubenswrapper[4893]: I0128 15:25:02.787246 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:25:02 crc kubenswrapper[4893]: I0128 15:25:02.817948 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podStartSLOduration=2.817911591 podStartE2EDuration="2.817911591s" podCreationTimestamp="2026-01-28 15:25:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:25:02.805300791 +0000 UTC m=+1420.578915819" watchObservedRunningTime="2026-01-28 15:25:02.817911591 +0000 UTC m=+1420.591526639" Jan 28 15:25:05 crc kubenswrapper[4893]: I0128 15:25:05.722612 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:25:05 crc kubenswrapper[4893]: I0128 15:25:05.723903 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:25:06 crc kubenswrapper[4893]: I0128 15:25:06.226287 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:25:06 crc kubenswrapper[4893]: I0128 15:25:06.727284 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-9fm7k"] Jan 28 15:25:06 crc kubenswrapper[4893]: I0128 15:25:06.729100 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-9fm7k" Jan 28 15:25:06 crc kubenswrapper[4893]: I0128 15:25:06.734000 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts" Jan 28 15:25:06 crc kubenswrapper[4893]: I0128 15:25:06.737727 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-9fm7k"] Jan 28 15:25:06 crc kubenswrapper[4893]: I0128 15:25:06.744541 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data" Jan 28 15:25:06 crc kubenswrapper[4893]: I0128 15:25:06.829669 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c-config-data\") pod \"nova-kuttl-cell0-cell-mapping-9fm7k\" (UID: \"ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-9fm7k" Jan 28 15:25:06 crc kubenswrapper[4893]: I0128 15:25:06.829769 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c-scripts\") pod \"nova-kuttl-cell0-cell-mapping-9fm7k\" (UID: \"ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-9fm7k" Jan 28 15:25:06 crc kubenswrapper[4893]: I0128 15:25:06.829860 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvkgh\" (UniqueName: \"kubernetes.io/projected/ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c-kube-api-access-rvkgh\") pod \"nova-kuttl-cell0-cell-mapping-9fm7k\" (UID: \"ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-9fm7k" Jan 28 15:25:06 crc kubenswrapper[4893]: I0128 15:25:06.931145 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c-config-data\") pod \"nova-kuttl-cell0-cell-mapping-9fm7k\" (UID: \"ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-9fm7k" Jan 28 15:25:06 crc kubenswrapper[4893]: I0128 15:25:06.931278 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c-scripts\") pod \"nova-kuttl-cell0-cell-mapping-9fm7k\" (UID: \"ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-9fm7k" Jan 28 15:25:06 crc kubenswrapper[4893]: I0128 15:25:06.931395 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvkgh\" (UniqueName: \"kubernetes.io/projected/ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c-kube-api-access-rvkgh\") pod \"nova-kuttl-cell0-cell-mapping-9fm7k\" (UID: \"ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-9fm7k" Jan 28 15:25:06 crc kubenswrapper[4893]: I0128 15:25:06.951184 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c-config-data\") pod \"nova-kuttl-cell0-cell-mapping-9fm7k\" (UID: \"ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-9fm7k" Jan 28 
15:25:06 crc kubenswrapper[4893]: I0128 15:25:06.951677 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c-scripts\") pod \"nova-kuttl-cell0-cell-mapping-9fm7k\" (UID: \"ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-9fm7k" Jan 28 15:25:06 crc kubenswrapper[4893]: I0128 15:25:06.955347 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvkgh\" (UniqueName: \"kubernetes.io/projected/ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c-kube-api-access-rvkgh\") pod \"nova-kuttl-cell0-cell-mapping-9fm7k\" (UID: \"ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-9fm7k" Jan 28 15:25:06 crc kubenswrapper[4893]: I0128 15:25:06.970016 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:25:06 crc kubenswrapper[4893]: I0128 15:25:06.971425 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:06 crc kubenswrapper[4893]: I0128 15:25:06.975888 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 28 15:25:06 crc kubenswrapper[4893]: I0128 15:25:06.993430 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.034393 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2caa78a1-4a88-4f1f-bfa7-7249f21d7aea-config-data\") pod \"nova-kuttl-api-0\" (UID: \"2caa78a1-4a88-4f1f-bfa7-7249f21d7aea\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.034528 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2caa78a1-4a88-4f1f-bfa7-7249f21d7aea-logs\") pod \"nova-kuttl-api-0\" (UID: \"2caa78a1-4a88-4f1f-bfa7-7249f21d7aea\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.034556 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2xxp\" (UniqueName: \"kubernetes.io/projected/2caa78a1-4a88-4f1f-bfa7-7249f21d7aea-kube-api-access-m2xxp\") pod \"nova-kuttl-api-0\" (UID: \"2caa78a1-4a88-4f1f-bfa7-7249f21d7aea\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.050978 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-9fm7k" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.092868 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.095169 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.097965 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-novncproxy-config-data" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.118431 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.136050 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2caa78a1-4a88-4f1f-bfa7-7249f21d7aea-config-data\") pod \"nova-kuttl-api-0\" (UID: \"2caa78a1-4a88-4f1f-bfa7-7249f21d7aea\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.136132 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v4sq\" (UniqueName: \"kubernetes.io/projected/fc5e5f56-6f65-41e2-9d47-fe5a59541a00-kube-api-access-2v4sq\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"fc5e5f56-6f65-41e2-9d47-fe5a59541a00\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.136214 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc5e5f56-6f65-41e2-9d47-fe5a59541a00-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"fc5e5f56-6f65-41e2-9d47-fe5a59541a00\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.136400 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2caa78a1-4a88-4f1f-bfa7-7249f21d7aea-logs\") pod \"nova-kuttl-api-0\" (UID: \"2caa78a1-4a88-4f1f-bfa7-7249f21d7aea\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.136431 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2xxp\" (UniqueName: \"kubernetes.io/projected/2caa78a1-4a88-4f1f-bfa7-7249f21d7aea-kube-api-access-m2xxp\") pod \"nova-kuttl-api-0\" (UID: \"2caa78a1-4a88-4f1f-bfa7-7249f21d7aea\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.142074 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2caa78a1-4a88-4f1f-bfa7-7249f21d7aea-logs\") pod \"nova-kuttl-api-0\" (UID: \"2caa78a1-4a88-4f1f-bfa7-7249f21d7aea\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.144424 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2caa78a1-4a88-4f1f-bfa7-7249f21d7aea-config-data\") pod \"nova-kuttl-api-0\" (UID: \"2caa78a1-4a88-4f1f-bfa7-7249f21d7aea\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.167234 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2xxp\" (UniqueName: \"kubernetes.io/projected/2caa78a1-4a88-4f1f-bfa7-7249f21d7aea-kube-api-access-m2xxp\") pod \"nova-kuttl-api-0\" (UID: \"2caa78a1-4a88-4f1f-bfa7-7249f21d7aea\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.192778 4893 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.194152 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.197944 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.223364 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.237717 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v4sq\" (UniqueName: \"kubernetes.io/projected/fc5e5f56-6f65-41e2-9d47-fe5a59541a00-kube-api-access-2v4sq\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"fc5e5f56-6f65-41e2-9d47-fe5a59541a00\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.237817 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/511001de-6aaa-4d6c-8973-4c5a639936f8-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"511001de-6aaa-4d6c-8973-4c5a639936f8\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.237860 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/511001de-6aaa-4d6c-8973-4c5a639936f8-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"511001de-6aaa-4d6c-8973-4c5a639936f8\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.237927 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v45wh\" (UniqueName: \"kubernetes.io/projected/511001de-6aaa-4d6c-8973-4c5a639936f8-kube-api-access-v45wh\") pod \"nova-kuttl-metadata-0\" (UID: \"511001de-6aaa-4d6c-8973-4c5a639936f8\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.237992 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc5e5f56-6f65-41e2-9d47-fe5a59541a00-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"fc5e5f56-6f65-41e2-9d47-fe5a59541a00\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.251114 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc5e5f56-6f65-41e2-9d47-fe5a59541a00-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"fc5e5f56-6f65-41e2-9d47-fe5a59541a00\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.258441 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v4sq\" (UniqueName: \"kubernetes.io/projected/fc5e5f56-6f65-41e2-9d47-fe5a59541a00-kube-api-access-2v4sq\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"fc5e5f56-6f65-41e2-9d47-fe5a59541a00\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.312622 4893 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.315505 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.327134 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.341170 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/511001de-6aaa-4d6c-8973-4c5a639936f8-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"511001de-6aaa-4d6c-8973-4c5a639936f8\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.341233 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/511001de-6aaa-4d6c-8973-4c5a639936f8-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"511001de-6aaa-4d6c-8973-4c5a639936f8\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.341297 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v45wh\" (UniqueName: \"kubernetes.io/projected/511001de-6aaa-4d6c-8973-4c5a639936f8-kube-api-access-v45wh\") pod \"nova-kuttl-metadata-0\" (UID: \"511001de-6aaa-4d6c-8973-4c5a639936f8\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.342228 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/511001de-6aaa-4d6c-8973-4c5a639936f8-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"511001de-6aaa-4d6c-8973-4c5a639936f8\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.345064 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.348693 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/511001de-6aaa-4d6c-8973-4c5a639936f8-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"511001de-6aaa-4d6c-8973-4c5a639936f8\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.355123 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.365767 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v45wh\" (UniqueName: \"kubernetes.io/projected/511001de-6aaa-4d6c-8973-4c5a639936f8-kube-api-access-v45wh\") pod \"nova-kuttl-metadata-0\" (UID: \"511001de-6aaa-4d6c-8973-4c5a639936f8\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.442839 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d6b9e66-b32f-444c-bb2b-6842eb6c4650-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"3d6b9e66-b32f-444c-bb2b-6842eb6c4650\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.442896 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b45m\" (UniqueName: \"kubernetes.io/projected/3d6b9e66-b32f-444c-bb2b-6842eb6c4650-kube-api-access-4b45m\") pod \"nova-kuttl-scheduler-0\" (UID: \"3d6b9e66-b32f-444c-bb2b-6842eb6c4650\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.536887 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.544443 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d6b9e66-b32f-444c-bb2b-6842eb6c4650-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"3d6b9e66-b32f-444c-bb2b-6842eb6c4650\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.544517 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b45m\" (UniqueName: \"kubernetes.io/projected/3d6b9e66-b32f-444c-bb2b-6842eb6c4650-kube-api-access-4b45m\") pod \"nova-kuttl-scheduler-0\" (UID: \"3d6b9e66-b32f-444c-bb2b-6842eb6c4650\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.552503 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d6b9e66-b32f-444c-bb2b-6842eb6c4650-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"3d6b9e66-b32f-444c-bb2b-6842eb6c4650\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.552778 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.572405 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b45m\" (UniqueName: \"kubernetes.io/projected/3d6b9e66-b32f-444c-bb2b-6842eb6c4650-kube-api-access-4b45m\") pod \"nova-kuttl-scheduler-0\" (UID: \"3d6b9e66-b32f-444c-bb2b-6842eb6c4650\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:07 crc kubenswrapper[4893]: I0128 15:25:07.649065 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:08 crc kubenswrapper[4893]: I0128 15:25:07.763104 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-9fm7k"] Jan 28 15:25:08 crc kubenswrapper[4893]: W0128 15:25:07.780840 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccb6f9dc_4e92_4fb7_8ac7_fa95257dee6c.slice/crio-4c62b2e88d1618d4194ea439b1cb4f5640855bc31ead26dc3e44c785789a8d7d WatchSource:0}: Error finding container 4c62b2e88d1618d4194ea439b1cb4f5640855bc31ead26dc3e44c785789a8d7d: Status 404 returned error can't find the container with id 4c62b2e88d1618d4194ea439b1cb4f5640855bc31ead26dc3e44c785789a8d7d Jan 28 15:25:08 crc kubenswrapper[4893]: I0128 15:25:07.832011 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-9fm7k" event={"ID":"ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c","Type":"ContainerStarted","Data":"4c62b2e88d1618d4194ea439b1cb4f5640855bc31ead26dc3e44c785789a8d7d"} Jan 28 15:25:08 crc kubenswrapper[4893]: I0128 15:25:07.880563 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-29hll"] Jan 28 15:25:08 crc kubenswrapper[4893]: I0128 15:25:07.884563 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-29hll" Jan 28 15:25:08 crc kubenswrapper[4893]: I0128 15:25:07.887076 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-scripts" Jan 28 15:25:08 crc kubenswrapper[4893]: I0128 15:25:07.887174 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 28 15:25:08 crc kubenswrapper[4893]: I0128 15:25:07.909852 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-29hll"] Jan 28 15:25:08 crc kubenswrapper[4893]: I0128 15:25:07.932506 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:25:08 crc kubenswrapper[4893]: I0128 15:25:08.054507 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/638660ab-7425-4aec-bc6e-480defa16c71-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-29hll\" (UID: \"638660ab-7425-4aec-bc6e-480defa16c71\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-29hll" Jan 28 15:25:08 crc kubenswrapper[4893]: I0128 15:25:08.054923 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9qns\" (UniqueName: \"kubernetes.io/projected/638660ab-7425-4aec-bc6e-480defa16c71-kube-api-access-n9qns\") pod \"nova-kuttl-cell1-conductor-db-sync-29hll\" (UID: \"638660ab-7425-4aec-bc6e-480defa16c71\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-29hll" Jan 28 15:25:08 crc kubenswrapper[4893]: I0128 15:25:08.055013 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/638660ab-7425-4aec-bc6e-480defa16c71-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-29hll\" (UID: \"638660ab-7425-4aec-bc6e-480defa16c71\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-29hll" Jan 
28 15:25:08 crc kubenswrapper[4893]: I0128 15:25:08.157250 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/638660ab-7425-4aec-bc6e-480defa16c71-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-29hll\" (UID: \"638660ab-7425-4aec-bc6e-480defa16c71\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-29hll" Jan 28 15:25:08 crc kubenswrapper[4893]: I0128 15:25:08.157295 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9qns\" (UniqueName: \"kubernetes.io/projected/638660ab-7425-4aec-bc6e-480defa16c71-kube-api-access-n9qns\") pod \"nova-kuttl-cell1-conductor-db-sync-29hll\" (UID: \"638660ab-7425-4aec-bc6e-480defa16c71\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-29hll" Jan 28 15:25:08 crc kubenswrapper[4893]: I0128 15:25:08.157347 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/638660ab-7425-4aec-bc6e-480defa16c71-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-29hll\" (UID: \"638660ab-7425-4aec-bc6e-480defa16c71\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-29hll" Jan 28 15:25:08 crc kubenswrapper[4893]: I0128 15:25:08.164404 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/638660ab-7425-4aec-bc6e-480defa16c71-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-29hll\" (UID: \"638660ab-7425-4aec-bc6e-480defa16c71\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-29hll" Jan 28 15:25:08 crc kubenswrapper[4893]: I0128 15:25:08.181743 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/638660ab-7425-4aec-bc6e-480defa16c71-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-29hll\" (UID: \"638660ab-7425-4aec-bc6e-480defa16c71\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-29hll" Jan 28 15:25:08 crc kubenswrapper[4893]: I0128 15:25:08.183263 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9qns\" (UniqueName: \"kubernetes.io/projected/638660ab-7425-4aec-bc6e-480defa16c71-kube-api-access-n9qns\") pod \"nova-kuttl-cell1-conductor-db-sync-29hll\" (UID: \"638660ab-7425-4aec-bc6e-480defa16c71\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-29hll" Jan 28 15:25:08 crc kubenswrapper[4893]: I0128 15:25:08.209893 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-29hll" Jan 28 15:25:08 crc kubenswrapper[4893]: I0128 15:25:08.695550 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-29hll"] Jan 28 15:25:08 crc kubenswrapper[4893]: I0128 15:25:08.706950 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:25:08 crc kubenswrapper[4893]: I0128 15:25:08.728385 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 15:25:08 crc kubenswrapper[4893]: I0128 15:25:08.736255 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:25:08 crc kubenswrapper[4893]: I0128 15:25:08.847861 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3d6b9e66-b32f-444c-bb2b-6842eb6c4650","Type":"ContainerStarted","Data":"a9558becf6629637dc3cd768242806a3b303e156dcd3643d8a4ac2990a18ab5b"} Jan 28 15:25:08 crc kubenswrapper[4893]: I0128 15:25:08.849035 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-29hll" event={"ID":"638660ab-7425-4aec-bc6e-480defa16c71","Type":"ContainerStarted","Data":"55714b50507713d7863e300c2691947c4cce0c60de4c2f8bba293fd69263d081"} Jan 28 15:25:08 crc kubenswrapper[4893]: I0128 15:25:08.850722 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"2caa78a1-4a88-4f1f-bfa7-7249f21d7aea","Type":"ContainerStarted","Data":"c8a53d9dd9e69cf169f724d2e671278d997a6e14ce0e963f26e79934879bd2a9"} Jan 28 15:25:08 crc kubenswrapper[4893]: I0128 15:25:08.853710 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"fc5e5f56-6f65-41e2-9d47-fe5a59541a00","Type":"ContainerStarted","Data":"2dfb1af8c6f96982ddeb307352502bddc1f52bc5391d71fec41cd405fa5dac9e"} Jan 28 15:25:08 crc kubenswrapper[4893]: I0128 15:25:08.855420 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-9fm7k" event={"ID":"ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c","Type":"ContainerStarted","Data":"91ec5c55c303d83089e45524a04ff93cf9a04b0599232146a5e43f8051d87b28"} Jan 28 15:25:08 crc kubenswrapper[4893]: I0128 15:25:08.856637 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"511001de-6aaa-4d6c-8973-4c5a639936f8","Type":"ContainerStarted","Data":"29ae408bb23241e0f8fc44931892638221078bdbd2ba1305848fd18171ae1197"} Jan 28 15:25:08 crc kubenswrapper[4893]: I0128 15:25:08.872058 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-9fm7k" podStartSLOduration=2.872022891 podStartE2EDuration="2.872022891s" podCreationTimestamp="2026-01-28 15:25:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:25:08.870541261 +0000 UTC m=+1426.644156289" watchObservedRunningTime="2026-01-28 15:25:08.872022891 +0000 UTC m=+1426.645637919" Jan 28 15:25:09 crc kubenswrapper[4893]: I0128 15:25:09.866338 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" 
event={"ID":"511001de-6aaa-4d6c-8973-4c5a639936f8","Type":"ContainerStarted","Data":"0afc32558bb2941a379e9e6fb7f3873547aff0928a2d4248045a09dbf8e2167b"} Jan 28 15:25:09 crc kubenswrapper[4893]: I0128 15:25:09.871003 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-29hll" event={"ID":"638660ab-7425-4aec-bc6e-480defa16c71","Type":"ContainerStarted","Data":"95417485af8fffaf3fa67426d68174802c0a060eff2a78ee01c981b94c80b2bb"} Jan 28 15:25:09 crc kubenswrapper[4893]: I0128 15:25:09.876913 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"2caa78a1-4a88-4f1f-bfa7-7249f21d7aea","Type":"ContainerStarted","Data":"109c40e3cec978a969866af85ef2297d1833afdb19a80bcd6d1e2523d75e0970"} Jan 28 15:25:09 crc kubenswrapper[4893]: I0128 15:25:09.899715 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-29hll" podStartSLOduration=2.899691428 podStartE2EDuration="2.899691428s" podCreationTimestamp="2026-01-28 15:25:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:25:09.898755593 +0000 UTC m=+1427.672370631" watchObservedRunningTime="2026-01-28 15:25:09.899691428 +0000 UTC m=+1427.673306456" Jan 28 15:25:10 crc kubenswrapper[4893]: I0128 15:25:10.913295 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"511001de-6aaa-4d6c-8973-4c5a639936f8","Type":"ContainerStarted","Data":"a53edf602bad82e5528ce1aa4ae76fd97d348c2551d696bab34a6c1ba54c6a24"} Jan 28 15:25:10 crc kubenswrapper[4893]: I0128 15:25:10.913612 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"2caa78a1-4a88-4f1f-bfa7-7249f21d7aea","Type":"ContainerStarted","Data":"5968573a8ea9cae6a8f9e2445daef8ba846216cb30dfb4db12d0946d09413f00"} Jan 28 15:25:10 crc kubenswrapper[4893]: I0128 15:25:10.936143 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=3.107838132 podStartE2EDuration="3.936124343s" podCreationTimestamp="2026-01-28 15:25:07 +0000 UTC" firstStartedPulling="2026-01-28 15:25:08.729372762 +0000 UTC m=+1426.502987790" lastFinishedPulling="2026-01-28 15:25:09.557658973 +0000 UTC m=+1427.331274001" observedRunningTime="2026-01-28 15:25:10.924656315 +0000 UTC m=+1428.698271343" watchObservedRunningTime="2026-01-28 15:25:10.936124343 +0000 UTC m=+1428.709739371" Jan 28 15:25:10 crc kubenswrapper[4893]: I0128 15:25:10.957595 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=3.346426065 podStartE2EDuration="4.957577192s" podCreationTimestamp="2026-01-28 15:25:06 +0000 UTC" firstStartedPulling="2026-01-28 15:25:07.941106795 +0000 UTC m=+1425.714721823" lastFinishedPulling="2026-01-28 15:25:09.552257922 +0000 UTC m=+1427.325872950" observedRunningTime="2026-01-28 15:25:10.95017691 +0000 UTC m=+1428.723791938" watchObservedRunningTime="2026-01-28 15:25:10.957577192 +0000 UTC m=+1428.731192220" Jan 28 15:25:11 crc kubenswrapper[4893]: I0128 15:25:11.920201 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" 
event={"ID":"fc5e5f56-6f65-41e2-9d47-fe5a59541a00","Type":"ContainerStarted","Data":"4f44fa1204db3503cb800782b765933bcaeca9ba851a533a9f3ee1e6defdf509"} Jan 28 15:25:11 crc kubenswrapper[4893]: I0128 15:25:11.924841 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3d6b9e66-b32f-444c-bb2b-6842eb6c4650","Type":"ContainerStarted","Data":"450e2a9a31464c5350e71b24406955131bc20a7a563dbc6cb4ede364b7446dd3"} Jan 28 15:25:11 crc kubenswrapper[4893]: I0128 15:25:11.948880 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podStartSLOduration=2.207471894 podStartE2EDuration="4.948856631s" podCreationTimestamp="2026-01-28 15:25:07 +0000 UTC" firstStartedPulling="2026-01-28 15:25:08.730305726 +0000 UTC m=+1426.503920754" lastFinishedPulling="2026-01-28 15:25:11.471690463 +0000 UTC m=+1429.245305491" observedRunningTime="2026-01-28 15:25:11.940213386 +0000 UTC m=+1429.713828444" watchObservedRunningTime="2026-01-28 15:25:11.948856631 +0000 UTC m=+1429.722471659" Jan 28 15:25:11 crc kubenswrapper[4893]: I0128 15:25:11.960309 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.193305665 podStartE2EDuration="4.960291859s" podCreationTimestamp="2026-01-28 15:25:07 +0000 UTC" firstStartedPulling="2026-01-28 15:25:08.707260206 +0000 UTC m=+1426.480875234" lastFinishedPulling="2026-01-28 15:25:11.4742464 +0000 UTC m=+1429.247861428" observedRunningTime="2026-01-28 15:25:11.957833925 +0000 UTC m=+1429.731448983" watchObservedRunningTime="2026-01-28 15:25:11.960291859 +0000 UTC m=+1429.733906887" Jan 28 15:25:12 crc kubenswrapper[4893]: I0128 15:25:12.537404 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:25:12 crc kubenswrapper[4893]: I0128 15:25:12.553647 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:12 crc kubenswrapper[4893]: I0128 15:25:12.553720 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:12 crc kubenswrapper[4893]: I0128 15:25:12.650073 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:16 crc kubenswrapper[4893]: I0128 15:25:16.970403 4893 generic.go:334] "Generic (PLEG): container finished" podID="ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c" containerID="91ec5c55c303d83089e45524a04ff93cf9a04b0599232146a5e43f8051d87b28" exitCode=0 Jan 28 15:25:16 crc kubenswrapper[4893]: I0128 15:25:16.970534 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-9fm7k" event={"ID":"ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c","Type":"ContainerDied","Data":"91ec5c55c303d83089e45524a04ff93cf9a04b0599232146a5e43f8051d87b28"} Jan 28 15:25:16 crc kubenswrapper[4893]: I0128 15:25:16.978657 4893 generic.go:334] "Generic (PLEG): container finished" podID="638660ab-7425-4aec-bc6e-480defa16c71" containerID="95417485af8fffaf3fa67426d68174802c0a060eff2a78ee01c981b94c80b2bb" exitCode=0 Jan 28 15:25:16 crc kubenswrapper[4893]: I0128 15:25:16.978708 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-29hll" 
event={"ID":"638660ab-7425-4aec-bc6e-480defa16c71","Type":"ContainerDied","Data":"95417485af8fffaf3fa67426d68174802c0a060eff2a78ee01c981b94c80b2bb"} Jan 28 15:25:17 crc kubenswrapper[4893]: I0128 15:25:17.356217 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:17 crc kubenswrapper[4893]: I0128 15:25:17.356707 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:17 crc kubenswrapper[4893]: I0128 15:25:17.537275 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:25:17 crc kubenswrapper[4893]: I0128 15:25:17.548609 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:25:17 crc kubenswrapper[4893]: I0128 15:25:17.554194 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:17 crc kubenswrapper[4893]: I0128 15:25:17.554413 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:17 crc kubenswrapper[4893]: I0128 15:25:17.650299 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:17 crc kubenswrapper[4893]: I0128 15:25:17.674036 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:18 crc kubenswrapper[4893]: I0128 15:25:18.007544 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:25:18 crc kubenswrapper[4893]: I0128 15:25:18.035890 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:18 crc kubenswrapper[4893]: I0128 15:25:18.443695 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="2caa78a1-4a88-4f1f-bfa7-7249f21d7aea" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.124:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:25:18 crc kubenswrapper[4893]: I0128 15:25:18.444858 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="2caa78a1-4a88-4f1f-bfa7-7249f21d7aea" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.124:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:25:18 crc kubenswrapper[4893]: I0128 15:25:18.544170 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-9fm7k" Jan 28 15:25:18 crc kubenswrapper[4893]: I0128 15:25:18.551886 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-29hll" Jan 28 15:25:18 crc kubenswrapper[4893]: I0128 15:25:18.637088 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="511001de-6aaa-4d6c-8973-4c5a639936f8" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.126:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:25:18 crc kubenswrapper[4893]: I0128 15:25:18.637111 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="511001de-6aaa-4d6c-8973-4c5a639936f8" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.126:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:25:18 crc kubenswrapper[4893]: I0128 15:25:18.703864 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/638660ab-7425-4aec-bc6e-480defa16c71-scripts\") pod \"638660ab-7425-4aec-bc6e-480defa16c71\" (UID: \"638660ab-7425-4aec-bc6e-480defa16c71\") " Jan 28 15:25:18 crc kubenswrapper[4893]: I0128 15:25:18.703967 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9qns\" (UniqueName: \"kubernetes.io/projected/638660ab-7425-4aec-bc6e-480defa16c71-kube-api-access-n9qns\") pod \"638660ab-7425-4aec-bc6e-480defa16c71\" (UID: \"638660ab-7425-4aec-bc6e-480defa16c71\") " Jan 28 15:25:18 crc kubenswrapper[4893]: I0128 15:25:18.704102 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvkgh\" (UniqueName: \"kubernetes.io/projected/ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c-kube-api-access-rvkgh\") pod \"ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c\" (UID: \"ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c\") " Jan 28 15:25:18 crc kubenswrapper[4893]: I0128 15:25:18.704151 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c-config-data\") pod \"ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c\" (UID: \"ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c\") " Jan 28 15:25:18 crc kubenswrapper[4893]: I0128 15:25:18.704254 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c-scripts\") pod \"ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c\" (UID: \"ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c\") " Jan 28 15:25:18 crc kubenswrapper[4893]: I0128 15:25:18.704346 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/638660ab-7425-4aec-bc6e-480defa16c71-config-data\") pod \"638660ab-7425-4aec-bc6e-480defa16c71\" (UID: \"638660ab-7425-4aec-bc6e-480defa16c71\") " Jan 28 15:25:18 crc kubenswrapper[4893]: I0128 15:25:18.715729 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c-kube-api-access-rvkgh" (OuterVolumeSpecName: "kube-api-access-rvkgh") pod "ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c" (UID: "ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c"). InnerVolumeSpecName "kube-api-access-rvkgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:25:18 crc kubenswrapper[4893]: I0128 15:25:18.727714 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/638660ab-7425-4aec-bc6e-480defa16c71-scripts" (OuterVolumeSpecName: "scripts") pod "638660ab-7425-4aec-bc6e-480defa16c71" (UID: "638660ab-7425-4aec-bc6e-480defa16c71"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:18 crc kubenswrapper[4893]: I0128 15:25:18.732698 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/638660ab-7425-4aec-bc6e-480defa16c71-kube-api-access-n9qns" (OuterVolumeSpecName: "kube-api-access-n9qns") pod "638660ab-7425-4aec-bc6e-480defa16c71" (UID: "638660ab-7425-4aec-bc6e-480defa16c71"). InnerVolumeSpecName "kube-api-access-n9qns". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:25:18 crc kubenswrapper[4893]: I0128 15:25:18.732905 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c-scripts" (OuterVolumeSpecName: "scripts") pod "ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c" (UID: "ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:18 crc kubenswrapper[4893]: I0128 15:25:18.733994 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/638660ab-7425-4aec-bc6e-480defa16c71-config-data" (OuterVolumeSpecName: "config-data") pod "638660ab-7425-4aec-bc6e-480defa16c71" (UID: "638660ab-7425-4aec-bc6e-480defa16c71"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:18 crc kubenswrapper[4893]: I0128 15:25:18.737098 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c-config-data" (OuterVolumeSpecName: "config-data") pod "ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c" (UID: "ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:18 crc kubenswrapper[4893]: I0128 15:25:18.806441 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvkgh\" (UniqueName: \"kubernetes.io/projected/ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c-kube-api-access-rvkgh\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:18 crc kubenswrapper[4893]: I0128 15:25:18.806493 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:18 crc kubenswrapper[4893]: I0128 15:25:18.806505 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:18 crc kubenswrapper[4893]: I0128 15:25:18.806515 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/638660ab-7425-4aec-bc6e-480defa16c71-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:18 crc kubenswrapper[4893]: I0128 15:25:18.806524 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/638660ab-7425-4aec-bc6e-480defa16c71-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:18 crc kubenswrapper[4893]: I0128 15:25:18.806534 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9qns\" (UniqueName: \"kubernetes.io/projected/638660ab-7425-4aec-bc6e-480defa16c71-kube-api-access-n9qns\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:19 crc kubenswrapper[4893]: I0128 15:25:19.001110 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-29hll" event={"ID":"638660ab-7425-4aec-bc6e-480defa16c71","Type":"ContainerDied","Data":"55714b50507713d7863e300c2691947c4cce0c60de4c2f8bba293fd69263d081"} Jan 28 15:25:19 crc kubenswrapper[4893]: I0128 15:25:19.001148 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55714b50507713d7863e300c2691947c4cce0c60de4c2f8bba293fd69263d081" Jan 28 15:25:19 crc kubenswrapper[4893]: I0128 15:25:19.001203 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-29hll" Jan 28 15:25:19 crc kubenswrapper[4893]: I0128 15:25:19.004213 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-9fm7k" event={"ID":"ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c","Type":"ContainerDied","Data":"4c62b2e88d1618d4194ea439b1cb4f5640855bc31ead26dc3e44c785789a8d7d"} Jan 28 15:25:19 crc kubenswrapper[4893]: I0128 15:25:19.004407 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c62b2e88d1618d4194ea439b1cb4f5640855bc31ead26dc3e44c785789a8d7d" Jan 28 15:25:19 crc kubenswrapper[4893]: I0128 15:25:19.004560 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-9fm7k" Jan 28 15:25:19 crc kubenswrapper[4893]: I0128 15:25:19.103325 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 15:25:19 crc kubenswrapper[4893]: E0128 15:25:19.104496 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c" containerName="nova-manage" Jan 28 15:25:19 crc kubenswrapper[4893]: I0128 15:25:19.104563 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c" containerName="nova-manage" Jan 28 15:25:19 crc kubenswrapper[4893]: E0128 15:25:19.104625 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="638660ab-7425-4aec-bc6e-480defa16c71" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 28 15:25:19 crc kubenswrapper[4893]: I0128 15:25:19.104692 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="638660ab-7425-4aec-bc6e-480defa16c71" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 28 15:25:19 crc kubenswrapper[4893]: I0128 15:25:19.104908 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="638660ab-7425-4aec-bc6e-480defa16c71" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 28 15:25:19 crc kubenswrapper[4893]: I0128 15:25:19.104983 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c" containerName="nova-manage" Jan 28 15:25:19 crc kubenswrapper[4893]: I0128 15:25:19.105661 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:25:19 crc kubenswrapper[4893]: I0128 15:25:19.109270 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 28 15:25:19 crc kubenswrapper[4893]: I0128 15:25:19.121500 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 15:25:19 crc kubenswrapper[4893]: I0128 15:25:19.213543 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db584031-a14c-4916-a5de-767628445966-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"db584031-a14c-4916-a5de-767628445966\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:25:19 crc kubenswrapper[4893]: I0128 15:25:19.213716 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d775v\" (UniqueName: \"kubernetes.io/projected/db584031-a14c-4916-a5de-767628445966-kube-api-access-d775v\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"db584031-a14c-4916-a5de-767628445966\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:25:19 crc kubenswrapper[4893]: I0128 15:25:19.247987 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:25:19 crc kubenswrapper[4893]: I0128 15:25:19.248445 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="2caa78a1-4a88-4f1f-bfa7-7249f21d7aea" containerName="nova-kuttl-api-log" containerID="cri-o://109c40e3cec978a969866af85ef2297d1833afdb19a80bcd6d1e2523d75e0970" gracePeriod=30 Jan 28 15:25:19 crc kubenswrapper[4893]: I0128 15:25:19.248694 4893 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="2caa78a1-4a88-4f1f-bfa7-7249f21d7aea" containerName="nova-kuttl-api-api" containerID="cri-o://5968573a8ea9cae6a8f9e2445daef8ba846216cb30dfb4db12d0946d09413f00" gracePeriod=30 Jan 28 15:25:19 crc kubenswrapper[4893]: I0128 15:25:19.315494 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d775v\" (UniqueName: \"kubernetes.io/projected/db584031-a14c-4916-a5de-767628445966-kube-api-access-d775v\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"db584031-a14c-4916-a5de-767628445966\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:25:19 crc kubenswrapper[4893]: I0128 15:25:19.315611 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db584031-a14c-4916-a5de-767628445966-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"db584031-a14c-4916-a5de-767628445966\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:25:19 crc kubenswrapper[4893]: I0128 15:25:19.320679 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db584031-a14c-4916-a5de-767628445966-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"db584031-a14c-4916-a5de-767628445966\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:25:19 crc kubenswrapper[4893]: I0128 15:25:19.327654 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:25:19 crc kubenswrapper[4893]: I0128 15:25:19.334948 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d775v\" (UniqueName: \"kubernetes.io/projected/db584031-a14c-4916-a5de-767628445966-kube-api-access-d775v\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"db584031-a14c-4916-a5de-767628445966\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:25:19 crc kubenswrapper[4893]: I0128 15:25:19.422001 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:25:19 crc kubenswrapper[4893]: I0128 15:25:19.446976 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:25:19 crc kubenswrapper[4893]: I0128 15:25:19.447303 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="511001de-6aaa-4d6c-8973-4c5a639936f8" containerName="nova-kuttl-metadata-log" containerID="cri-o://0afc32558bb2941a379e9e6fb7f3873547aff0928a2d4248045a09dbf8e2167b" gracePeriod=30 Jan 28 15:25:19 crc kubenswrapper[4893]: I0128 15:25:19.447399 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="511001de-6aaa-4d6c-8973-4c5a639936f8" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://a53edf602bad82e5528ce1aa4ae76fd97d348c2551d696bab34a6c1ba54c6a24" gracePeriod=30 Jan 28 15:25:20 crc kubenswrapper[4893]: I0128 15:25:20.014031 4893 generic.go:334] "Generic (PLEG): container finished" podID="2caa78a1-4a88-4f1f-bfa7-7249f21d7aea" containerID="109c40e3cec978a969866af85ef2297d1833afdb19a80bcd6d1e2523d75e0970" exitCode=143 Jan 28 15:25:20 crc kubenswrapper[4893]: I0128 15:25:20.014404 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"2caa78a1-4a88-4f1f-bfa7-7249f21d7aea","Type":"ContainerDied","Data":"109c40e3cec978a969866af85ef2297d1833afdb19a80bcd6d1e2523d75e0970"} Jan 28 15:25:20 crc kubenswrapper[4893]: I0128 15:25:20.015901 4893 generic.go:334] "Generic (PLEG): container finished" podID="511001de-6aaa-4d6c-8973-4c5a639936f8" containerID="0afc32558bb2941a379e9e6fb7f3873547aff0928a2d4248045a09dbf8e2167b" exitCode=143 Jan 28 15:25:20 crc kubenswrapper[4893]: I0128 15:25:20.016049 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="3d6b9e66-b32f-444c-bb2b-6842eb6c4650" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://450e2a9a31464c5350e71b24406955131bc20a7a563dbc6cb4ede364b7446dd3" gracePeriod=30 Jan 28 15:25:20 crc kubenswrapper[4893]: I0128 15:25:20.016693 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"511001de-6aaa-4d6c-8973-4c5a639936f8","Type":"ContainerDied","Data":"0afc32558bb2941a379e9e6fb7f3873547aff0928a2d4248045a09dbf8e2167b"} Jan 28 15:25:20 crc kubenswrapper[4893]: I0128 15:25:20.266997 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 15:25:21 crc kubenswrapper[4893]: I0128 15:25:21.030955 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"db584031-a14c-4916-a5de-767628445966","Type":"ContainerStarted","Data":"3b74684b7ff31dd2de1020b80d60cadc54168506dc25324121a4b86128900c97"} Jan 28 15:25:21 crc kubenswrapper[4893]: I0128 15:25:21.031295 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"db584031-a14c-4916-a5de-767628445966","Type":"ContainerStarted","Data":"dbc312e37f41ae658b018a1b398c10a6b5e4012c86cbd536d3fce69ec4133461"} Jan 28 15:25:21 crc kubenswrapper[4893]: I0128 15:25:21.031945 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:25:21 crc 
kubenswrapper[4893]: I0128 15:25:21.051029 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podStartSLOduration=2.051008286 podStartE2EDuration="2.051008286s" podCreationTimestamp="2026-01-28 15:25:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:25:21.049809205 +0000 UTC m=+1438.823424233" watchObservedRunningTime="2026-01-28 15:25:21.051008286 +0000 UTC m=+1438.824623314" Jan 28 15:25:22 crc kubenswrapper[4893]: E0128 15:25:22.652947 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="450e2a9a31464c5350e71b24406955131bc20a7a563dbc6cb4ede364b7446dd3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 15:25:22 crc kubenswrapper[4893]: E0128 15:25:22.655442 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="450e2a9a31464c5350e71b24406955131bc20a7a563dbc6cb4ede364b7446dd3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 15:25:22 crc kubenswrapper[4893]: E0128 15:25:22.659656 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="450e2a9a31464c5350e71b24406955131bc20a7a563dbc6cb4ede364b7446dd3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 15:25:22 crc kubenswrapper[4893]: E0128 15:25:22.659748 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="3d6b9e66-b32f-444c-bb2b-6842eb6c4650" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:25:23 crc kubenswrapper[4893]: I0128 15:25:23.619780 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:23 crc kubenswrapper[4893]: I0128 15:25:23.746665 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4b45m\" (UniqueName: \"kubernetes.io/projected/3d6b9e66-b32f-444c-bb2b-6842eb6c4650-kube-api-access-4b45m\") pod \"3d6b9e66-b32f-444c-bb2b-6842eb6c4650\" (UID: \"3d6b9e66-b32f-444c-bb2b-6842eb6c4650\") " Jan 28 15:25:23 crc kubenswrapper[4893]: I0128 15:25:23.747092 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d6b9e66-b32f-444c-bb2b-6842eb6c4650-config-data\") pod \"3d6b9e66-b32f-444c-bb2b-6842eb6c4650\" (UID: \"3d6b9e66-b32f-444c-bb2b-6842eb6c4650\") " Jan 28 15:25:23 crc kubenswrapper[4893]: I0128 15:25:23.752713 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d6b9e66-b32f-444c-bb2b-6842eb6c4650-kube-api-access-4b45m" (OuterVolumeSpecName: "kube-api-access-4b45m") pod "3d6b9e66-b32f-444c-bb2b-6842eb6c4650" (UID: "3d6b9e66-b32f-444c-bb2b-6842eb6c4650"). InnerVolumeSpecName "kube-api-access-4b45m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:25:23 crc kubenswrapper[4893]: I0128 15:25:23.772458 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d6b9e66-b32f-444c-bb2b-6842eb6c4650-config-data" (OuterVolumeSpecName: "config-data") pod "3d6b9e66-b32f-444c-bb2b-6842eb6c4650" (UID: "3d6b9e66-b32f-444c-bb2b-6842eb6c4650"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:23 crc kubenswrapper[4893]: I0128 15:25:23.848808 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4b45m\" (UniqueName: \"kubernetes.io/projected/3d6b9e66-b32f-444c-bb2b-6842eb6c4650-kube-api-access-4b45m\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:23 crc kubenswrapper[4893]: I0128 15:25:23.848851 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d6b9e66-b32f-444c-bb2b-6842eb6c4650-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:23.999988 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.051635 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2xxp\" (UniqueName: \"kubernetes.io/projected/2caa78a1-4a88-4f1f-bfa7-7249f21d7aea-kube-api-access-m2xxp\") pod \"2caa78a1-4a88-4f1f-bfa7-7249f21d7aea\" (UID: \"2caa78a1-4a88-4f1f-bfa7-7249f21d7aea\") " Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.051743 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2caa78a1-4a88-4f1f-bfa7-7249f21d7aea-logs\") pod \"2caa78a1-4a88-4f1f-bfa7-7249f21d7aea\" (UID: \"2caa78a1-4a88-4f1f-bfa7-7249f21d7aea\") " Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.051942 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2caa78a1-4a88-4f1f-bfa7-7249f21d7aea-config-data\") pod \"2caa78a1-4a88-4f1f-bfa7-7249f21d7aea\" (UID: \"2caa78a1-4a88-4f1f-bfa7-7249f21d7aea\") " Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.052261 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2caa78a1-4a88-4f1f-bfa7-7249f21d7aea-logs" (OuterVolumeSpecName: "logs") pod "2caa78a1-4a88-4f1f-bfa7-7249f21d7aea" (UID: "2caa78a1-4a88-4f1f-bfa7-7249f21d7aea"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.052369 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2caa78a1-4a88-4f1f-bfa7-7249f21d7aea-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.055061 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2caa78a1-4a88-4f1f-bfa7-7249f21d7aea-kube-api-access-m2xxp" (OuterVolumeSpecName: "kube-api-access-m2xxp") pod "2caa78a1-4a88-4f1f-bfa7-7249f21d7aea" (UID: "2caa78a1-4a88-4f1f-bfa7-7249f21d7aea"). InnerVolumeSpecName "kube-api-access-m2xxp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.059125 4893 generic.go:334] "Generic (PLEG): container finished" podID="3d6b9e66-b32f-444c-bb2b-6842eb6c4650" containerID="450e2a9a31464c5350e71b24406955131bc20a7a563dbc6cb4ede364b7446dd3" exitCode=0 Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.059158 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3d6b9e66-b32f-444c-bb2b-6842eb6c4650","Type":"ContainerDied","Data":"450e2a9a31464c5350e71b24406955131bc20a7a563dbc6cb4ede364b7446dd3"} Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.059195 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3d6b9e66-b32f-444c-bb2b-6842eb6c4650","Type":"ContainerDied","Data":"a9558becf6629637dc3cd768242806a3b303e156dcd3643d8a4ac2990a18ab5b"} Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.059214 4893 scope.go:117] "RemoveContainer" containerID="450e2a9a31464c5350e71b24406955131bc20a7a563dbc6cb4ede364b7446dd3" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.059253 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.067118 4893 generic.go:334] "Generic (PLEG): container finished" podID="2caa78a1-4a88-4f1f-bfa7-7249f21d7aea" containerID="5968573a8ea9cae6a8f9e2445daef8ba846216cb30dfb4db12d0946d09413f00" exitCode=0 Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.067216 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"2caa78a1-4a88-4f1f-bfa7-7249f21d7aea","Type":"ContainerDied","Data":"5968573a8ea9cae6a8f9e2445daef8ba846216cb30dfb4db12d0946d09413f00"} Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.067255 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"2caa78a1-4a88-4f1f-bfa7-7249f21d7aea","Type":"ContainerDied","Data":"c8a53d9dd9e69cf169f724d2e671278d997a6e14ce0e963f26e79934879bd2a9"} Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.067217 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.071900 4893 generic.go:334] "Generic (PLEG): container finished" podID="511001de-6aaa-4d6c-8973-4c5a639936f8" containerID="a53edf602bad82e5528ce1aa4ae76fd97d348c2551d696bab34a6c1ba54c6a24" exitCode=0 Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.071943 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"511001de-6aaa-4d6c-8973-4c5a639936f8","Type":"ContainerDied","Data":"a53edf602bad82e5528ce1aa4ae76fd97d348c2551d696bab34a6c1ba54c6a24"} Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.076560 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2caa78a1-4a88-4f1f-bfa7-7249f21d7aea-config-data" (OuterVolumeSpecName: "config-data") pod "2caa78a1-4a88-4f1f-bfa7-7249f21d7aea" (UID: "2caa78a1-4a88-4f1f-bfa7-7249f21d7aea"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.093256 4893 scope.go:117] "RemoveContainer" containerID="450e2a9a31464c5350e71b24406955131bc20a7a563dbc6cb4ede364b7446dd3" Jan 28 15:25:24 crc kubenswrapper[4893]: E0128 15:25:24.097162 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"450e2a9a31464c5350e71b24406955131bc20a7a563dbc6cb4ede364b7446dd3\": container with ID starting with 450e2a9a31464c5350e71b24406955131bc20a7a563dbc6cb4ede364b7446dd3 not found: ID does not exist" containerID="450e2a9a31464c5350e71b24406955131bc20a7a563dbc6cb4ede364b7446dd3" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.097218 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"450e2a9a31464c5350e71b24406955131bc20a7a563dbc6cb4ede364b7446dd3"} err="failed to get container status \"450e2a9a31464c5350e71b24406955131bc20a7a563dbc6cb4ede364b7446dd3\": rpc error: code = NotFound desc = could not find container \"450e2a9a31464c5350e71b24406955131bc20a7a563dbc6cb4ede364b7446dd3\": container with ID starting with 450e2a9a31464c5350e71b24406955131bc20a7a563dbc6cb4ede364b7446dd3 not found: ID does not exist" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.097253 4893 scope.go:117] "RemoveContainer" containerID="5968573a8ea9cae6a8f9e2445daef8ba846216cb30dfb4db12d0946d09413f00" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.108916 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.115320 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.127619 4893 scope.go:117] "RemoveContainer" containerID="109c40e3cec978a969866af85ef2297d1833afdb19a80bcd6d1e2523d75e0970" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.127703 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:25:24 crc kubenswrapper[4893]: E0128 15:25:24.128665 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d6b9e66-b32f-444c-bb2b-6842eb6c4650" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.128684 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d6b9e66-b32f-444c-bb2b-6842eb6c4650" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:25:24 crc kubenswrapper[4893]: E0128 15:25:24.128703 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2caa78a1-4a88-4f1f-bfa7-7249f21d7aea" containerName="nova-kuttl-api-log" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.128711 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="2caa78a1-4a88-4f1f-bfa7-7249f21d7aea" containerName="nova-kuttl-api-log" Jan 28 15:25:24 crc kubenswrapper[4893]: E0128 15:25:24.128773 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2caa78a1-4a88-4f1f-bfa7-7249f21d7aea" containerName="nova-kuttl-api-api" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.128781 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="2caa78a1-4a88-4f1f-bfa7-7249f21d7aea" containerName="nova-kuttl-api-api" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.129028 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="2caa78a1-4a88-4f1f-bfa7-7249f21d7aea" 
containerName="nova-kuttl-api-log" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.129055 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="2caa78a1-4a88-4f1f-bfa7-7249f21d7aea" containerName="nova-kuttl-api-api" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.129066 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d6b9e66-b32f-444c-bb2b-6842eb6c4650" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.129835 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.132166 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.153303 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q6m4\" (UniqueName: \"kubernetes.io/projected/02942b90-ffb9-4923-9081-dad14f2c1b5a-kube-api-access-7q6m4\") pod \"nova-kuttl-scheduler-0\" (UID: \"02942b90-ffb9-4923-9081-dad14f2c1b5a\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.153378 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02942b90-ffb9-4923-9081-dad14f2c1b5a-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"02942b90-ffb9-4923-9081-dad14f2c1b5a\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.153532 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2caa78a1-4a88-4f1f-bfa7-7249f21d7aea-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.153550 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2xxp\" (UniqueName: \"kubernetes.io/projected/2caa78a1-4a88-4f1f-bfa7-7249f21d7aea-kube-api-access-m2xxp\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.156119 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.160583 4893 scope.go:117] "RemoveContainer" containerID="5968573a8ea9cae6a8f9e2445daef8ba846216cb30dfb4db12d0946d09413f00" Jan 28 15:25:24 crc kubenswrapper[4893]: E0128 15:25:24.161136 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5968573a8ea9cae6a8f9e2445daef8ba846216cb30dfb4db12d0946d09413f00\": container with ID starting with 5968573a8ea9cae6a8f9e2445daef8ba846216cb30dfb4db12d0946d09413f00 not found: ID does not exist" containerID="5968573a8ea9cae6a8f9e2445daef8ba846216cb30dfb4db12d0946d09413f00" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.161193 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5968573a8ea9cae6a8f9e2445daef8ba846216cb30dfb4db12d0946d09413f00"} err="failed to get container status \"5968573a8ea9cae6a8f9e2445daef8ba846216cb30dfb4db12d0946d09413f00\": rpc error: code = NotFound desc = could not find container \"5968573a8ea9cae6a8f9e2445daef8ba846216cb30dfb4db12d0946d09413f00\": container with ID starting with 
5968573a8ea9cae6a8f9e2445daef8ba846216cb30dfb4db12d0946d09413f00 not found: ID does not exist" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.161235 4893 scope.go:117] "RemoveContainer" containerID="109c40e3cec978a969866af85ef2297d1833afdb19a80bcd6d1e2523d75e0970" Jan 28 15:25:24 crc kubenswrapper[4893]: E0128 15:25:24.161842 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"109c40e3cec978a969866af85ef2297d1833afdb19a80bcd6d1e2523d75e0970\": container with ID starting with 109c40e3cec978a969866af85ef2297d1833afdb19a80bcd6d1e2523d75e0970 not found: ID does not exist" containerID="109c40e3cec978a969866af85ef2297d1833afdb19a80bcd6d1e2523d75e0970" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.161884 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"109c40e3cec978a969866af85ef2297d1833afdb19a80bcd6d1e2523d75e0970"} err="failed to get container status \"109c40e3cec978a969866af85ef2297d1833afdb19a80bcd6d1e2523d75e0970\": rpc error: code = NotFound desc = could not find container \"109c40e3cec978a969866af85ef2297d1833afdb19a80bcd6d1e2523d75e0970\": container with ID starting with 109c40e3cec978a969866af85ef2297d1833afdb19a80bcd6d1e2523d75e0970 not found: ID does not exist" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.255196 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q6m4\" (UniqueName: \"kubernetes.io/projected/02942b90-ffb9-4923-9081-dad14f2c1b5a-kube-api-access-7q6m4\") pod \"nova-kuttl-scheduler-0\" (UID: \"02942b90-ffb9-4923-9081-dad14f2c1b5a\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.255263 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02942b90-ffb9-4923-9081-dad14f2c1b5a-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"02942b90-ffb9-4923-9081-dad14f2c1b5a\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.258916 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02942b90-ffb9-4923-9081-dad14f2c1b5a-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"02942b90-ffb9-4923-9081-dad14f2c1b5a\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.274309 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q6m4\" (UniqueName: \"kubernetes.io/projected/02942b90-ffb9-4923-9081-dad14f2c1b5a-kube-api-access-7q6m4\") pod \"nova-kuttl-scheduler-0\" (UID: \"02942b90-ffb9-4923-9081-dad14f2c1b5a\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.416983 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.450237 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.458166 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.460693 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.462788 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.466133 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.482096 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.561571 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n866d\" (UniqueName: \"kubernetes.io/projected/b6a352f2-cd25-4db6-a176-3b588b69090b-kube-api-access-n866d\") pod \"nova-kuttl-api-0\" (UID: \"b6a352f2-cd25-4db6-a176-3b588b69090b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.561677 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6a352f2-cd25-4db6-a176-3b588b69090b-config-data\") pod \"nova-kuttl-api-0\" (UID: \"b6a352f2-cd25-4db6-a176-3b588b69090b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.561720 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6a352f2-cd25-4db6-a176-3b588b69090b-logs\") pod \"nova-kuttl-api-0\" (UID: \"b6a352f2-cd25-4db6-a176-3b588b69090b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.662861 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6a352f2-cd25-4db6-a176-3b588b69090b-config-data\") pod \"nova-kuttl-api-0\" (UID: \"b6a352f2-cd25-4db6-a176-3b588b69090b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.662919 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6a352f2-cd25-4db6-a176-3b588b69090b-logs\") pod \"nova-kuttl-api-0\" (UID: \"b6a352f2-cd25-4db6-a176-3b588b69090b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.662974 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n866d\" (UniqueName: \"kubernetes.io/projected/b6a352f2-cd25-4db6-a176-3b588b69090b-kube-api-access-n866d\") pod \"nova-kuttl-api-0\" (UID: \"b6a352f2-cd25-4db6-a176-3b588b69090b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.663847 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6a352f2-cd25-4db6-a176-3b588b69090b-logs\") pod \"nova-kuttl-api-0\" (UID: \"b6a352f2-cd25-4db6-a176-3b588b69090b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.681022 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b6a352f2-cd25-4db6-a176-3b588b69090b-config-data\") pod \"nova-kuttl-api-0\" (UID: \"b6a352f2-cd25-4db6-a176-3b588b69090b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.689269 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n866d\" (UniqueName: \"kubernetes.io/projected/b6a352f2-cd25-4db6-a176-3b588b69090b-kube-api-access-n866d\") pod \"nova-kuttl-api-0\" (UID: \"b6a352f2-cd25-4db6-a176-3b588b69090b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.737096 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.764206 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v45wh\" (UniqueName: \"kubernetes.io/projected/511001de-6aaa-4d6c-8973-4c5a639936f8-kube-api-access-v45wh\") pod \"511001de-6aaa-4d6c-8973-4c5a639936f8\" (UID: \"511001de-6aaa-4d6c-8973-4c5a639936f8\") " Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.764535 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/511001de-6aaa-4d6c-8973-4c5a639936f8-logs\") pod \"511001de-6aaa-4d6c-8973-4c5a639936f8\" (UID: \"511001de-6aaa-4d6c-8973-4c5a639936f8\") " Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.764578 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/511001de-6aaa-4d6c-8973-4c5a639936f8-config-data\") pod \"511001de-6aaa-4d6c-8973-4c5a639936f8\" (UID: \"511001de-6aaa-4d6c-8973-4c5a639936f8\") " Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.766934 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/511001de-6aaa-4d6c-8973-4c5a639936f8-logs" (OuterVolumeSpecName: "logs") pod "511001de-6aaa-4d6c-8973-4c5a639936f8" (UID: "511001de-6aaa-4d6c-8973-4c5a639936f8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.769646 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/511001de-6aaa-4d6c-8973-4c5a639936f8-kube-api-access-v45wh" (OuterVolumeSpecName: "kube-api-access-v45wh") pod "511001de-6aaa-4d6c-8973-4c5a639936f8" (UID: "511001de-6aaa-4d6c-8973-4c5a639936f8"). InnerVolumeSpecName "kube-api-access-v45wh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.786263 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/511001de-6aaa-4d6c-8973-4c5a639936f8-config-data" (OuterVolumeSpecName: "config-data") pod "511001de-6aaa-4d6c-8973-4c5a639936f8" (UID: "511001de-6aaa-4d6c-8973-4c5a639936f8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.866722 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v45wh\" (UniqueName: \"kubernetes.io/projected/511001de-6aaa-4d6c-8973-4c5a639936f8-kube-api-access-v45wh\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.866794 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/511001de-6aaa-4d6c-8973-4c5a639936f8-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.866810 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/511001de-6aaa-4d6c-8973-4c5a639936f8-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.870625 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.904074 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2caa78a1-4a88-4f1f-bfa7-7249f21d7aea" path="/var/lib/kubelet/pods/2caa78a1-4a88-4f1f-bfa7-7249f21d7aea/volumes" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.905053 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d6b9e66-b32f-444c-bb2b-6842eb6c4650" path="/var/lib/kubelet/pods/3d6b9e66-b32f-444c-bb2b-6842eb6c4650/volumes" Jan 28 15:25:24 crc kubenswrapper[4893]: I0128 15:25:24.936514 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:25:24 crc kubenswrapper[4893]: W0128 15:25:24.944873 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02942b90_ffb9_4923_9081_dad14f2c1b5a.slice/crio-9573e22dd638568a10272da6f812b9180de5e881b40ff35c612f7952a313e559 WatchSource:0}: Error finding container 9573e22dd638568a10272da6f812b9180de5e881b40ff35c612f7952a313e559: Status 404 returned error can't find the container with id 9573e22dd638568a10272da6f812b9180de5e881b40ff35c612f7952a313e559 Jan 28 15:25:25 crc kubenswrapper[4893]: I0128 15:25:25.087542 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"02942b90-ffb9-4923-9081-dad14f2c1b5a","Type":"ContainerStarted","Data":"9573e22dd638568a10272da6f812b9180de5e881b40ff35c612f7952a313e559"} Jan 28 15:25:25 crc kubenswrapper[4893]: I0128 15:25:25.091585 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"511001de-6aaa-4d6c-8973-4c5a639936f8","Type":"ContainerDied","Data":"29ae408bb23241e0f8fc44931892638221078bdbd2ba1305848fd18171ae1197"} Jan 28 15:25:25 crc kubenswrapper[4893]: I0128 15:25:25.091643 4893 scope.go:117] "RemoveContainer" containerID="a53edf602bad82e5528ce1aa4ae76fd97d348c2551d696bab34a6c1ba54c6a24" Jan 28 15:25:25 crc kubenswrapper[4893]: I0128 15:25:25.091654 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:25 crc kubenswrapper[4893]: I0128 15:25:25.122216 4893 scope.go:117] "RemoveContainer" containerID="0afc32558bb2941a379e9e6fb7f3873547aff0928a2d4248045a09dbf8e2167b" Jan 28 15:25:25 crc kubenswrapper[4893]: I0128 15:25:25.122645 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:25:25 crc kubenswrapper[4893]: I0128 15:25:25.132557 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:25:25 crc kubenswrapper[4893]: I0128 15:25:25.151824 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:25:25 crc kubenswrapper[4893]: E0128 15:25:25.152360 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="511001de-6aaa-4d6c-8973-4c5a639936f8" containerName="nova-kuttl-metadata-metadata" Jan 28 15:25:25 crc kubenswrapper[4893]: I0128 15:25:25.152384 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="511001de-6aaa-4d6c-8973-4c5a639936f8" containerName="nova-kuttl-metadata-metadata" Jan 28 15:25:25 crc kubenswrapper[4893]: E0128 15:25:25.152404 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="511001de-6aaa-4d6c-8973-4c5a639936f8" containerName="nova-kuttl-metadata-log" Jan 28 15:25:25 crc kubenswrapper[4893]: I0128 15:25:25.152412 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="511001de-6aaa-4d6c-8973-4c5a639936f8" containerName="nova-kuttl-metadata-log" Jan 28 15:25:25 crc kubenswrapper[4893]: I0128 15:25:25.152659 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="511001de-6aaa-4d6c-8973-4c5a639936f8" containerName="nova-kuttl-metadata-metadata" Jan 28 15:25:25 crc kubenswrapper[4893]: I0128 15:25:25.152684 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="511001de-6aaa-4d6c-8973-4c5a639936f8" containerName="nova-kuttl-metadata-log" Jan 28 15:25:25 crc kubenswrapper[4893]: I0128 15:25:25.153670 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:25 crc kubenswrapper[4893]: I0128 15:25:25.156074 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 28 15:25:25 crc kubenswrapper[4893]: I0128 15:25:25.171078 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:25:25 crc kubenswrapper[4893]: I0128 15:25:25.277416 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d90311c-4f94-4454-9b2f-65ad0ad28ec9-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"9d90311c-4f94-4454-9b2f-65ad0ad28ec9\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:25 crc kubenswrapper[4893]: I0128 15:25:25.277507 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d90311c-4f94-4454-9b2f-65ad0ad28ec9-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"9d90311c-4f94-4454-9b2f-65ad0ad28ec9\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:25 crc kubenswrapper[4893]: I0128 15:25:25.277550 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmsln\" (UniqueName: \"kubernetes.io/projected/9d90311c-4f94-4454-9b2f-65ad0ad28ec9-kube-api-access-lmsln\") pod \"nova-kuttl-metadata-0\" (UID: \"9d90311c-4f94-4454-9b2f-65ad0ad28ec9\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:25 crc kubenswrapper[4893]: I0128 15:25:25.303738 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:25:25 crc kubenswrapper[4893]: W0128 15:25:25.306614 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6a352f2_cd25_4db6_a176_3b588b69090b.slice/crio-04fdc85aef5bb6cb1e4f01900b69140cd2a8da1fe379c4852c8d2389c1e4a780 WatchSource:0}: Error finding container 04fdc85aef5bb6cb1e4f01900b69140cd2a8da1fe379c4852c8d2389c1e4a780: Status 404 returned error can't find the container with id 04fdc85aef5bb6cb1e4f01900b69140cd2a8da1fe379c4852c8d2389c1e4a780 Jan 28 15:25:25 crc kubenswrapper[4893]: I0128 15:25:25.379342 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d90311c-4f94-4454-9b2f-65ad0ad28ec9-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"9d90311c-4f94-4454-9b2f-65ad0ad28ec9\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:25 crc kubenswrapper[4893]: I0128 15:25:25.379416 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d90311c-4f94-4454-9b2f-65ad0ad28ec9-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"9d90311c-4f94-4454-9b2f-65ad0ad28ec9\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:25 crc kubenswrapper[4893]: I0128 15:25:25.379470 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmsln\" (UniqueName: \"kubernetes.io/projected/9d90311c-4f94-4454-9b2f-65ad0ad28ec9-kube-api-access-lmsln\") pod \"nova-kuttl-metadata-0\" (UID: \"9d90311c-4f94-4454-9b2f-65ad0ad28ec9\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:25 crc kubenswrapper[4893]: I0128 15:25:25.379960 4893 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d90311c-4f94-4454-9b2f-65ad0ad28ec9-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"9d90311c-4f94-4454-9b2f-65ad0ad28ec9\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:25 crc kubenswrapper[4893]: I0128 15:25:25.390212 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d90311c-4f94-4454-9b2f-65ad0ad28ec9-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"9d90311c-4f94-4454-9b2f-65ad0ad28ec9\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:25 crc kubenswrapper[4893]: I0128 15:25:25.398244 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmsln\" (UniqueName: \"kubernetes.io/projected/9d90311c-4f94-4454-9b2f-65ad0ad28ec9-kube-api-access-lmsln\") pod \"nova-kuttl-metadata-0\" (UID: \"9d90311c-4f94-4454-9b2f-65ad0ad28ec9\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:25 crc kubenswrapper[4893]: I0128 15:25:25.476724 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:26 crc kubenswrapper[4893]: I0128 15:25:26.011796 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:25:26 crc kubenswrapper[4893]: I0128 15:25:26.104378 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"b6a352f2-cd25-4db6-a176-3b588b69090b","Type":"ContainerStarted","Data":"dd3369ccb4f382e630cedf3ce9a3c8c0a4f25edaff1f127682ddd34489177fa6"} Jan 28 15:25:26 crc kubenswrapper[4893]: I0128 15:25:26.104431 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"b6a352f2-cd25-4db6-a176-3b588b69090b","Type":"ContainerStarted","Data":"39b15ec90e0dc314d096277a31cdad18469b55883ef43179688aa28a17f3a61a"} Jan 28 15:25:26 crc kubenswrapper[4893]: I0128 15:25:26.104443 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"b6a352f2-cd25-4db6-a176-3b588b69090b","Type":"ContainerStarted","Data":"04fdc85aef5bb6cb1e4f01900b69140cd2a8da1fe379c4852c8d2389c1e4a780"} Jan 28 15:25:26 crc kubenswrapper[4893]: I0128 15:25:26.105884 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"02942b90-ffb9-4923-9081-dad14f2c1b5a","Type":"ContainerStarted","Data":"6f99f01c5cfb076c514bf061e7003db4965f1fa45cc52737bca7faba3cc4778e"} Jan 28 15:25:26 crc kubenswrapper[4893]: I0128 15:25:26.107058 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"9d90311c-4f94-4454-9b2f-65ad0ad28ec9","Type":"ContainerStarted","Data":"291ce4f91e3c496017d58b02389ff63930c2aa8fe14c784d614116ec8a91f147"} Jan 28 15:25:26 crc kubenswrapper[4893]: I0128 15:25:26.118782 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.118763812 podStartE2EDuration="2.118763812s" podCreationTimestamp="2026-01-28 15:25:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:25:26.117985422 +0000 UTC m=+1443.891600460" watchObservedRunningTime="2026-01-28 15:25:26.118763812 +0000 UTC m=+1443.892378840" Jan 28 15:25:26 crc kubenswrapper[4893]: I0128 
15:25:26.136028 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.136010381 podStartE2EDuration="2.136010381s" podCreationTimestamp="2026-01-28 15:25:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:25:26.134256946 +0000 UTC m=+1443.907871964" watchObservedRunningTime="2026-01-28 15:25:26.136010381 +0000 UTC m=+1443.909625409" Jan 28 15:25:26 crc kubenswrapper[4893]: I0128 15:25:26.903615 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="511001de-6aaa-4d6c-8973-4c5a639936f8" path="/var/lib/kubelet/pods/511001de-6aaa-4d6c-8973-4c5a639936f8/volumes" Jan 28 15:25:27 crc kubenswrapper[4893]: I0128 15:25:27.116716 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"9d90311c-4f94-4454-9b2f-65ad0ad28ec9","Type":"ContainerStarted","Data":"ad51c55f5dab353684b43f6c5fc47aae5decee54ce5a41c6a691080ae67ab8bb"} Jan 28 15:25:27 crc kubenswrapper[4893]: I0128 15:25:27.116816 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"9d90311c-4f94-4454-9b2f-65ad0ad28ec9","Type":"ContainerStarted","Data":"8f29d373dd3e2dc531a15d659d44a0c41142859cad1d7fefdb75e6f254d66a21"} Jan 28 15:25:27 crc kubenswrapper[4893]: I0128 15:25:27.144050 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.144027056 podStartE2EDuration="2.144027056s" podCreationTimestamp="2026-01-28 15:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:25:27.137994979 +0000 UTC m=+1444.911610017" watchObservedRunningTime="2026-01-28 15:25:27.144027056 +0000 UTC m=+1444.917642084" Jan 28 15:25:29 crc kubenswrapper[4893]: I0128 15:25:29.449518 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:25:29 crc kubenswrapper[4893]: I0128 15:25:29.459659 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:29 crc kubenswrapper[4893]: I0128 15:25:29.932928 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-v4zxk"] Jan 28 15:25:29 crc kubenswrapper[4893]: I0128 15:25:29.934313 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-v4zxk" Jan 28 15:25:29 crc kubenswrapper[4893]: I0128 15:25:29.938080 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-config-data" Jan 28 15:25:29 crc kubenswrapper[4893]: I0128 15:25:29.938692 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-scripts" Jan 28 15:25:29 crc kubenswrapper[4893]: I0128 15:25:29.944493 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-v4zxk"] Jan 28 15:25:29 crc kubenswrapper[4893]: I0128 15:25:29.981422 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de68770-7e4d-4fcb-98de-79c995444045-config-data\") pod \"nova-kuttl-cell1-cell-mapping-v4zxk\" (UID: \"3de68770-7e4d-4fcb-98de-79c995444045\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-v4zxk" Jan 28 15:25:29 crc kubenswrapper[4893]: I0128 15:25:29.981499 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de68770-7e4d-4fcb-98de-79c995444045-scripts\") pod \"nova-kuttl-cell1-cell-mapping-v4zxk\" (UID: \"3de68770-7e4d-4fcb-98de-79c995444045\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-v4zxk" Jan 28 15:25:29 crc kubenswrapper[4893]: I0128 15:25:29.981720 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9rxk\" (UniqueName: \"kubernetes.io/projected/3de68770-7e4d-4fcb-98de-79c995444045-kube-api-access-n9rxk\") pod \"nova-kuttl-cell1-cell-mapping-v4zxk\" (UID: \"3de68770-7e4d-4fcb-98de-79c995444045\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-v4zxk" Jan 28 15:25:30 crc kubenswrapper[4893]: I0128 15:25:30.083077 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de68770-7e4d-4fcb-98de-79c995444045-config-data\") pod \"nova-kuttl-cell1-cell-mapping-v4zxk\" (UID: \"3de68770-7e4d-4fcb-98de-79c995444045\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-v4zxk" Jan 28 15:25:30 crc kubenswrapper[4893]: I0128 15:25:30.083130 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de68770-7e4d-4fcb-98de-79c995444045-scripts\") pod \"nova-kuttl-cell1-cell-mapping-v4zxk\" (UID: \"3de68770-7e4d-4fcb-98de-79c995444045\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-v4zxk" Jan 28 15:25:30 crc kubenswrapper[4893]: I0128 15:25:30.083188 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9rxk\" (UniqueName: \"kubernetes.io/projected/3de68770-7e4d-4fcb-98de-79c995444045-kube-api-access-n9rxk\") pod \"nova-kuttl-cell1-cell-mapping-v4zxk\" (UID: \"3de68770-7e4d-4fcb-98de-79c995444045\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-v4zxk" Jan 28 15:25:30 crc kubenswrapper[4893]: I0128 15:25:30.090523 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de68770-7e4d-4fcb-98de-79c995444045-scripts\") pod \"nova-kuttl-cell1-cell-mapping-v4zxk\" (UID: \"3de68770-7e4d-4fcb-98de-79c995444045\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-v4zxk" Jan 28 15:25:30 crc 
kubenswrapper[4893]: I0128 15:25:30.098134 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de68770-7e4d-4fcb-98de-79c995444045-config-data\") pod \"nova-kuttl-cell1-cell-mapping-v4zxk\" (UID: \"3de68770-7e4d-4fcb-98de-79c995444045\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-v4zxk" Jan 28 15:25:30 crc kubenswrapper[4893]: I0128 15:25:30.099779 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9rxk\" (UniqueName: \"kubernetes.io/projected/3de68770-7e4d-4fcb-98de-79c995444045-kube-api-access-n9rxk\") pod \"nova-kuttl-cell1-cell-mapping-v4zxk\" (UID: \"3de68770-7e4d-4fcb-98de-79c995444045\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-v4zxk" Jan 28 15:25:30 crc kubenswrapper[4893]: I0128 15:25:30.293686 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-v4zxk" Jan 28 15:25:30 crc kubenswrapper[4893]: I0128 15:25:30.477354 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:30 crc kubenswrapper[4893]: I0128 15:25:30.477757 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:30 crc kubenswrapper[4893]: I0128 15:25:30.709021 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-v4zxk"] Jan 28 15:25:30 crc kubenswrapper[4893]: W0128 15:25:30.715390 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3de68770_7e4d_4fcb_98de_79c995444045.slice/crio-32cde85c852ffd5707f9401a05fa3b7e3b81da152f05e3ac2f2538ef047b8410 WatchSource:0}: Error finding container 32cde85c852ffd5707f9401a05fa3b7e3b81da152f05e3ac2f2538ef047b8410: Status 404 returned error can't find the container with id 32cde85c852ffd5707f9401a05fa3b7e3b81da152f05e3ac2f2538ef047b8410 Jan 28 15:25:31 crc kubenswrapper[4893]: I0128 15:25:31.005781 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qkdc4"] Jan 28 15:25:31 crc kubenswrapper[4893]: I0128 15:25:31.008225 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qkdc4" Jan 28 15:25:31 crc kubenswrapper[4893]: I0128 15:25:31.020866 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qkdc4"] Jan 28 15:25:31 crc kubenswrapper[4893]: I0128 15:25:31.156692 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-v4zxk" event={"ID":"3de68770-7e4d-4fcb-98de-79c995444045","Type":"ContainerStarted","Data":"b40b81269e6b5fcf655bb9fbc81abfab97ed34a4392813fc8bc15ae71afaa3c7"} Jan 28 15:25:31 crc kubenswrapper[4893]: I0128 15:25:31.156735 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-v4zxk" event={"ID":"3de68770-7e4d-4fcb-98de-79c995444045","Type":"ContainerStarted","Data":"32cde85c852ffd5707f9401a05fa3b7e3b81da152f05e3ac2f2538ef047b8410"} Jan 28 15:25:31 crc kubenswrapper[4893]: I0128 15:25:31.203119 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7tjp\" (UniqueName: \"kubernetes.io/projected/a91c49bf-667f-4ec6-a996-6bad3ac2886f-kube-api-access-h7tjp\") pod \"redhat-operators-qkdc4\" (UID: \"a91c49bf-667f-4ec6-a996-6bad3ac2886f\") " pod="openshift-marketplace/redhat-operators-qkdc4" Jan 28 15:25:31 crc kubenswrapper[4893]: I0128 15:25:31.203226 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a91c49bf-667f-4ec6-a996-6bad3ac2886f-catalog-content\") pod \"redhat-operators-qkdc4\" (UID: \"a91c49bf-667f-4ec6-a996-6bad3ac2886f\") " pod="openshift-marketplace/redhat-operators-qkdc4" Jan 28 15:25:31 crc kubenswrapper[4893]: I0128 15:25:31.204359 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a91c49bf-667f-4ec6-a996-6bad3ac2886f-utilities\") pod \"redhat-operators-qkdc4\" (UID: \"a91c49bf-667f-4ec6-a996-6bad3ac2886f\") " pod="openshift-marketplace/redhat-operators-qkdc4" Jan 28 15:25:31 crc kubenswrapper[4893]: I0128 15:25:31.305893 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a91c49bf-667f-4ec6-a996-6bad3ac2886f-utilities\") pod \"redhat-operators-qkdc4\" (UID: \"a91c49bf-667f-4ec6-a996-6bad3ac2886f\") " pod="openshift-marketplace/redhat-operators-qkdc4" Jan 28 15:25:31 crc kubenswrapper[4893]: I0128 15:25:31.305978 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7tjp\" (UniqueName: \"kubernetes.io/projected/a91c49bf-667f-4ec6-a996-6bad3ac2886f-kube-api-access-h7tjp\") pod \"redhat-operators-qkdc4\" (UID: \"a91c49bf-667f-4ec6-a996-6bad3ac2886f\") " pod="openshift-marketplace/redhat-operators-qkdc4" Jan 28 15:25:31 crc kubenswrapper[4893]: I0128 15:25:31.306067 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a91c49bf-667f-4ec6-a996-6bad3ac2886f-catalog-content\") pod \"redhat-operators-qkdc4\" (UID: \"a91c49bf-667f-4ec6-a996-6bad3ac2886f\") " pod="openshift-marketplace/redhat-operators-qkdc4" Jan 28 15:25:31 crc kubenswrapper[4893]: I0128 15:25:31.306362 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a91c49bf-667f-4ec6-a996-6bad3ac2886f-utilities\") pod 
\"redhat-operators-qkdc4\" (UID: \"a91c49bf-667f-4ec6-a996-6bad3ac2886f\") " pod="openshift-marketplace/redhat-operators-qkdc4" Jan 28 15:25:31 crc kubenswrapper[4893]: I0128 15:25:31.306446 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a91c49bf-667f-4ec6-a996-6bad3ac2886f-catalog-content\") pod \"redhat-operators-qkdc4\" (UID: \"a91c49bf-667f-4ec6-a996-6bad3ac2886f\") " pod="openshift-marketplace/redhat-operators-qkdc4" Jan 28 15:25:31 crc kubenswrapper[4893]: I0128 15:25:31.324959 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7tjp\" (UniqueName: \"kubernetes.io/projected/a91c49bf-667f-4ec6-a996-6bad3ac2886f-kube-api-access-h7tjp\") pod \"redhat-operators-qkdc4\" (UID: \"a91c49bf-667f-4ec6-a996-6bad3ac2886f\") " pod="openshift-marketplace/redhat-operators-qkdc4" Jan 28 15:25:31 crc kubenswrapper[4893]: I0128 15:25:31.332026 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qkdc4" Jan 28 15:25:31 crc kubenswrapper[4893]: I0128 15:25:31.769979 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-v4zxk" podStartSLOduration=2.769954656 podStartE2EDuration="2.769954656s" podCreationTimestamp="2026-01-28 15:25:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:25:31.176667161 +0000 UTC m=+1448.950282199" watchObservedRunningTime="2026-01-28 15:25:31.769954656 +0000 UTC m=+1449.543569684" Jan 28 15:25:31 crc kubenswrapper[4893]: I0128 15:25:31.773315 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qkdc4"] Jan 28 15:25:32 crc kubenswrapper[4893]: I0128 15:25:32.166986 4893 generic.go:334] "Generic (PLEG): container finished" podID="a91c49bf-667f-4ec6-a996-6bad3ac2886f" containerID="5268147d6f66717a7acb2ff88880a9b8662d31b58afdfb09dc74ab1c23bc68b3" exitCode=0 Jan 28 15:25:32 crc kubenswrapper[4893]: I0128 15:25:32.167042 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkdc4" event={"ID":"a91c49bf-667f-4ec6-a996-6bad3ac2886f","Type":"ContainerDied","Data":"5268147d6f66717a7acb2ff88880a9b8662d31b58afdfb09dc74ab1c23bc68b3"} Jan 28 15:25:32 crc kubenswrapper[4893]: I0128 15:25:32.167299 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkdc4" event={"ID":"a91c49bf-667f-4ec6-a996-6bad3ac2886f","Type":"ContainerStarted","Data":"e6be83fc8960f6668990eecf9b8be0a6c1cc8ec0af5019e9dd4efbd1dffdeef9"} Jan 28 15:25:33 crc kubenswrapper[4893]: I0128 15:25:33.176108 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkdc4" event={"ID":"a91c49bf-667f-4ec6-a996-6bad3ac2886f","Type":"ContainerStarted","Data":"f374a8a412200772a5cddfca6cac9003ed999280b4366673a65748294c1a4cae"} Jan 28 15:25:34 crc kubenswrapper[4893]: I0128 15:25:34.459505 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:34 crc kubenswrapper[4893]: I0128 15:25:34.481374 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:34 crc kubenswrapper[4893]: I0128 15:25:34.871462 4893 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:34 crc kubenswrapper[4893]: I0128 15:25:34.871546 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:35 crc kubenswrapper[4893]: I0128 15:25:35.223142 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:35 crc kubenswrapper[4893]: I0128 15:25:35.477512 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:35 crc kubenswrapper[4893]: I0128 15:25:35.477604 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:35 crc kubenswrapper[4893]: I0128 15:25:35.722806 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:25:35 crc kubenswrapper[4893]: I0128 15:25:35.722876 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:25:35 crc kubenswrapper[4893]: I0128 15:25:35.722922 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" Jan 28 15:25:35 crc kubenswrapper[4893]: I0128 15:25:35.723668 4893 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d8e5d57be71719656edc4624e7904c0b8f16b72637bcea1f2d833d180bb5c4bd"} pod="openshift-machine-config-operator/machine-config-daemon-l2nht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:25:35 crc kubenswrapper[4893]: I0128 15:25:35.723725 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" containerID="cri-o://d8e5d57be71719656edc4624e7904c0b8f16b72637bcea1f2d833d180bb5c4bd" gracePeriod=600 Jan 28 15:25:35 crc kubenswrapper[4893]: I0128 15:25:35.954737 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="b6a352f2-cd25-4db6-a176-3b588b69090b" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.131:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:25:35 crc kubenswrapper[4893]: I0128 15:25:35.954900 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="b6a352f2-cd25-4db6-a176-3b588b69090b" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.131:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:25:36 crc kubenswrapper[4893]: I0128 15:25:36.202222 4893 generic.go:334] "Generic (PLEG): container finished" podID="b2ddd967-f9a8-464a-95de-512c9c5874fd" 
containerID="d8e5d57be71719656edc4624e7904c0b8f16b72637bcea1f2d833d180bb5c4bd" exitCode=0 Jan 28 15:25:36 crc kubenswrapper[4893]: I0128 15:25:36.202306 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" event={"ID":"b2ddd967-f9a8-464a-95de-512c9c5874fd","Type":"ContainerDied","Data":"d8e5d57be71719656edc4624e7904c0b8f16b72637bcea1f2d833d180bb5c4bd"} Jan 28 15:25:36 crc kubenswrapper[4893]: I0128 15:25:36.202347 4893 scope.go:117] "RemoveContainer" containerID="eaa47c5c31906ab74e7bc044988a1088092bc8e70af984b1414760728f1c9f6e" Jan 28 15:25:36 crc kubenswrapper[4893]: I0128 15:25:36.208097 4893 generic.go:334] "Generic (PLEG): container finished" podID="a91c49bf-667f-4ec6-a996-6bad3ac2886f" containerID="f374a8a412200772a5cddfca6cac9003ed999280b4366673a65748294c1a4cae" exitCode=0 Jan 28 15:25:36 crc kubenswrapper[4893]: I0128 15:25:36.208189 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkdc4" event={"ID":"a91c49bf-667f-4ec6-a996-6bad3ac2886f","Type":"ContainerDied","Data":"f374a8a412200772a5cddfca6cac9003ed999280b4366673a65748294c1a4cae"} Jan 28 15:25:36 crc kubenswrapper[4893]: I0128 15:25:36.559701 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="9d90311c-4f94-4454-9b2f-65ad0ad28ec9" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.132:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:25:36 crc kubenswrapper[4893]: I0128 15:25:36.559715 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="9d90311c-4f94-4454-9b2f-65ad0ad28ec9" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.132:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:25:37 crc kubenswrapper[4893]: I0128 15:25:37.218582 4893 generic.go:334] "Generic (PLEG): container finished" podID="3de68770-7e4d-4fcb-98de-79c995444045" containerID="b40b81269e6b5fcf655bb9fbc81abfab97ed34a4392813fc8bc15ae71afaa3c7" exitCode=0 Jan 28 15:25:37 crc kubenswrapper[4893]: I0128 15:25:37.218668 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-v4zxk" event={"ID":"3de68770-7e4d-4fcb-98de-79c995444045","Type":"ContainerDied","Data":"b40b81269e6b5fcf655bb9fbc81abfab97ed34a4392813fc8bc15ae71afaa3c7"} Jan 28 15:25:37 crc kubenswrapper[4893]: I0128 15:25:37.232087 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkdc4" event={"ID":"a91c49bf-667f-4ec6-a996-6bad3ac2886f","Type":"ContainerStarted","Data":"a045cf9543e4813fa13a36c6c7f4950e9c878ee91a268a8f865199067214dfcd"} Jan 28 15:25:37 crc kubenswrapper[4893]: I0128 15:25:37.265316 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qkdc4" podStartSLOduration=2.41388955 podStartE2EDuration="7.265296427s" podCreationTimestamp="2026-01-28 15:25:30 +0000 UTC" firstStartedPulling="2026-01-28 15:25:32.168630528 +0000 UTC m=+1449.942245566" lastFinishedPulling="2026-01-28 15:25:37.020037415 +0000 UTC m=+1454.793652443" observedRunningTime="2026-01-28 15:25:37.256107278 +0000 UTC m=+1455.029722306" watchObservedRunningTime="2026-01-28 15:25:37.265296427 +0000 UTC m=+1455.038911455" Jan 28 15:25:38 crc kubenswrapper[4893]: I0128 
15:25:38.247209 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" event={"ID":"b2ddd967-f9a8-464a-95de-512c9c5874fd","Type":"ContainerStarted","Data":"79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51"} Jan 28 15:25:38 crc kubenswrapper[4893]: I0128 15:25:38.693828 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-v4zxk" Jan 28 15:25:38 crc kubenswrapper[4893]: I0128 15:25:38.826171 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de68770-7e4d-4fcb-98de-79c995444045-scripts\") pod \"3de68770-7e4d-4fcb-98de-79c995444045\" (UID: \"3de68770-7e4d-4fcb-98de-79c995444045\") " Jan 28 15:25:38 crc kubenswrapper[4893]: I0128 15:25:38.826284 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de68770-7e4d-4fcb-98de-79c995444045-config-data\") pod \"3de68770-7e4d-4fcb-98de-79c995444045\" (UID: \"3de68770-7e4d-4fcb-98de-79c995444045\") " Jan 28 15:25:38 crc kubenswrapper[4893]: I0128 15:25:38.826378 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9rxk\" (UniqueName: \"kubernetes.io/projected/3de68770-7e4d-4fcb-98de-79c995444045-kube-api-access-n9rxk\") pod \"3de68770-7e4d-4fcb-98de-79c995444045\" (UID: \"3de68770-7e4d-4fcb-98de-79c995444045\") " Jan 28 15:25:38 crc kubenswrapper[4893]: I0128 15:25:38.832268 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de68770-7e4d-4fcb-98de-79c995444045-scripts" (OuterVolumeSpecName: "scripts") pod "3de68770-7e4d-4fcb-98de-79c995444045" (UID: "3de68770-7e4d-4fcb-98de-79c995444045"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:38 crc kubenswrapper[4893]: I0128 15:25:38.832538 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3de68770-7e4d-4fcb-98de-79c995444045-kube-api-access-n9rxk" (OuterVolumeSpecName: "kube-api-access-n9rxk") pod "3de68770-7e4d-4fcb-98de-79c995444045" (UID: "3de68770-7e4d-4fcb-98de-79c995444045"). InnerVolumeSpecName "kube-api-access-n9rxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:25:38 crc kubenswrapper[4893]: I0128 15:25:38.852181 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de68770-7e4d-4fcb-98de-79c995444045-config-data" (OuterVolumeSpecName: "config-data") pod "3de68770-7e4d-4fcb-98de-79c995444045" (UID: "3de68770-7e4d-4fcb-98de-79c995444045"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:38 crc kubenswrapper[4893]: I0128 15:25:38.928055 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9rxk\" (UniqueName: \"kubernetes.io/projected/3de68770-7e4d-4fcb-98de-79c995444045-kube-api-access-n9rxk\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:38 crc kubenswrapper[4893]: I0128 15:25:38.928099 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de68770-7e4d-4fcb-98de-79c995444045-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:38 crc kubenswrapper[4893]: I0128 15:25:38.928111 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de68770-7e4d-4fcb-98de-79c995444045-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:39 crc kubenswrapper[4893]: I0128 15:25:39.255450 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-v4zxk" event={"ID":"3de68770-7e4d-4fcb-98de-79c995444045","Type":"ContainerDied","Data":"32cde85c852ffd5707f9401a05fa3b7e3b81da152f05e3ac2f2538ef047b8410"} Jan 28 15:25:39 crc kubenswrapper[4893]: I0128 15:25:39.255515 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32cde85c852ffd5707f9401a05fa3b7e3b81da152f05e3ac2f2538ef047b8410" Jan 28 15:25:39 crc kubenswrapper[4893]: I0128 15:25:39.255493 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-v4zxk" Jan 28 15:25:39 crc kubenswrapper[4893]: I0128 15:25:39.532801 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:25:39 crc kubenswrapper[4893]: I0128 15:25:39.533454 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="b6a352f2-cd25-4db6-a176-3b588b69090b" containerName="nova-kuttl-api-log" containerID="cri-o://39b15ec90e0dc314d096277a31cdad18469b55883ef43179688aa28a17f3a61a" gracePeriod=30 Jan 28 15:25:39 crc kubenswrapper[4893]: I0128 15:25:39.533568 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="b6a352f2-cd25-4db6-a176-3b588b69090b" containerName="nova-kuttl-api-api" containerID="cri-o://dd3369ccb4f382e630cedf3ce9a3c8c0a4f25edaff1f127682ddd34489177fa6" gracePeriod=30 Jan 28 15:25:39 crc kubenswrapper[4893]: I0128 15:25:39.548628 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:25:39 crc kubenswrapper[4893]: I0128 15:25:39.548877 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="02942b90-ffb9-4923-9081-dad14f2c1b5a" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://6f99f01c5cfb076c514bf061e7003db4965f1fa45cc52737bca7faba3cc4778e" gracePeriod=30 Jan 28 15:25:39 crc kubenswrapper[4893]: I0128 15:25:39.629154 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:25:39 crc kubenswrapper[4893]: I0128 15:25:39.629409 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="9d90311c-4f94-4454-9b2f-65ad0ad28ec9" containerName="nova-kuttl-metadata-log" 
containerID="cri-o://8f29d373dd3e2dc531a15d659d44a0c41142859cad1d7fefdb75e6f254d66a21" gracePeriod=30 Jan 28 15:25:39 crc kubenswrapper[4893]: I0128 15:25:39.629596 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="9d90311c-4f94-4454-9b2f-65ad0ad28ec9" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://ad51c55f5dab353684b43f6c5fc47aae5decee54ce5a41c6a691080ae67ab8bb" gracePeriod=30 Jan 28 15:25:40 crc kubenswrapper[4893]: I0128 15:25:40.266817 4893 generic.go:334] "Generic (PLEG): container finished" podID="b6a352f2-cd25-4db6-a176-3b588b69090b" containerID="39b15ec90e0dc314d096277a31cdad18469b55883ef43179688aa28a17f3a61a" exitCode=143 Jan 28 15:25:40 crc kubenswrapper[4893]: I0128 15:25:40.266889 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"b6a352f2-cd25-4db6-a176-3b588b69090b","Type":"ContainerDied","Data":"39b15ec90e0dc314d096277a31cdad18469b55883ef43179688aa28a17f3a61a"} Jan 28 15:25:40 crc kubenswrapper[4893]: I0128 15:25:40.269457 4893 generic.go:334] "Generic (PLEG): container finished" podID="9d90311c-4f94-4454-9b2f-65ad0ad28ec9" containerID="8f29d373dd3e2dc531a15d659d44a0c41142859cad1d7fefdb75e6f254d66a21" exitCode=143 Jan 28 15:25:40 crc kubenswrapper[4893]: I0128 15:25:40.269494 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"9d90311c-4f94-4454-9b2f-65ad0ad28ec9","Type":"ContainerDied","Data":"8f29d373dd3e2dc531a15d659d44a0c41142859cad1d7fefdb75e6f254d66a21"} Jan 28 15:25:41 crc kubenswrapper[4893]: I0128 15:25:41.332559 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qkdc4" Jan 28 15:25:41 crc kubenswrapper[4893]: I0128 15:25:41.332968 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qkdc4" Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.002038 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.183466 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02942b90-ffb9-4923-9081-dad14f2c1b5a-config-data\") pod \"02942b90-ffb9-4923-9081-dad14f2c1b5a\" (UID: \"02942b90-ffb9-4923-9081-dad14f2c1b5a\") " Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.183862 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7q6m4\" (UniqueName: \"kubernetes.io/projected/02942b90-ffb9-4923-9081-dad14f2c1b5a-kube-api-access-7q6m4\") pod \"02942b90-ffb9-4923-9081-dad14f2c1b5a\" (UID: \"02942b90-ffb9-4923-9081-dad14f2c1b5a\") " Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.191423 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02942b90-ffb9-4923-9081-dad14f2c1b5a-kube-api-access-7q6m4" (OuterVolumeSpecName: "kube-api-access-7q6m4") pod "02942b90-ffb9-4923-9081-dad14f2c1b5a" (UID: "02942b90-ffb9-4923-9081-dad14f2c1b5a"). InnerVolumeSpecName "kube-api-access-7q6m4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.204420 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02942b90-ffb9-4923-9081-dad14f2c1b5a-config-data" (OuterVolumeSpecName: "config-data") pod "02942b90-ffb9-4923-9081-dad14f2c1b5a" (UID: "02942b90-ffb9-4923-9081-dad14f2c1b5a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.285543 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02942b90-ffb9-4923-9081-dad14f2c1b5a-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.285588 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7q6m4\" (UniqueName: \"kubernetes.io/projected/02942b90-ffb9-4923-9081-dad14f2c1b5a-kube-api-access-7q6m4\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.287114 4893 generic.go:334] "Generic (PLEG): container finished" podID="02942b90-ffb9-4923-9081-dad14f2c1b5a" containerID="6f99f01c5cfb076c514bf061e7003db4965f1fa45cc52737bca7faba3cc4778e" exitCode=0 Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.287185 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"02942b90-ffb9-4923-9081-dad14f2c1b5a","Type":"ContainerDied","Data":"6f99f01c5cfb076c514bf061e7003db4965f1fa45cc52737bca7faba3cc4778e"} Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.287218 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"02942b90-ffb9-4923-9081-dad14f2c1b5a","Type":"ContainerDied","Data":"9573e22dd638568a10272da6f812b9180de5e881b40ff35c612f7952a313e559"} Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.287243 4893 scope.go:117] "RemoveContainer" containerID="6f99f01c5cfb076c514bf061e7003db4965f1fa45cc52737bca7faba3cc4778e" Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.287243 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.310680 4893 scope.go:117] "RemoveContainer" containerID="6f99f01c5cfb076c514bf061e7003db4965f1fa45cc52737bca7faba3cc4778e" Jan 28 15:25:42 crc kubenswrapper[4893]: E0128 15:25:42.311124 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f99f01c5cfb076c514bf061e7003db4965f1fa45cc52737bca7faba3cc4778e\": container with ID starting with 6f99f01c5cfb076c514bf061e7003db4965f1fa45cc52737bca7faba3cc4778e not found: ID does not exist" containerID="6f99f01c5cfb076c514bf061e7003db4965f1fa45cc52737bca7faba3cc4778e" Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.311205 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f99f01c5cfb076c514bf061e7003db4965f1fa45cc52737bca7faba3cc4778e"} err="failed to get container status \"6f99f01c5cfb076c514bf061e7003db4965f1fa45cc52737bca7faba3cc4778e\": rpc error: code = NotFound desc = could not find container \"6f99f01c5cfb076c514bf061e7003db4965f1fa45cc52737bca7faba3cc4778e\": container with ID starting with 6f99f01c5cfb076c514bf061e7003db4965f1fa45cc52737bca7faba3cc4778e not found: ID does not exist" Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.325383 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.332243 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.344959 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:25:42 crc kubenswrapper[4893]: E0128 15:25:42.345412 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3de68770-7e4d-4fcb-98de-79c995444045" containerName="nova-manage" Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.345430 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de68770-7e4d-4fcb-98de-79c995444045" containerName="nova-manage" Jan 28 15:25:42 crc kubenswrapper[4893]: E0128 15:25:42.345464 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02942b90-ffb9-4923-9081-dad14f2c1b5a" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.345495 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="02942b90-ffb9-4923-9081-dad14f2c1b5a" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.345685 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="3de68770-7e4d-4fcb-98de-79c995444045" containerName="nova-manage" Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.345700 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="02942b90-ffb9-4923-9081-dad14f2c1b5a" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.346318 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.349439 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.355236 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.404841 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgvts\" (UniqueName: \"kubernetes.io/projected/c157b73b-8217-4593-bfa1-ed8b0191ec7e-kube-api-access-pgvts\") pod \"nova-kuttl-scheduler-0\" (UID: \"c157b73b-8217-4593-bfa1-ed8b0191ec7e\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.404958 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c157b73b-8217-4593-bfa1-ed8b0191ec7e-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"c157b73b-8217-4593-bfa1-ed8b0191ec7e\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.406526 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qkdc4" podUID="a91c49bf-667f-4ec6-a996-6bad3ac2886f" containerName="registry-server" probeResult="failure" output=< Jan 28 15:25:42 crc kubenswrapper[4893]: timeout: failed to connect service ":50051" within 1s Jan 28 15:25:42 crc kubenswrapper[4893]: > Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.506148 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c157b73b-8217-4593-bfa1-ed8b0191ec7e-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"c157b73b-8217-4593-bfa1-ed8b0191ec7e\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.506239 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgvts\" (UniqueName: \"kubernetes.io/projected/c157b73b-8217-4593-bfa1-ed8b0191ec7e-kube-api-access-pgvts\") pod \"nova-kuttl-scheduler-0\" (UID: \"c157b73b-8217-4593-bfa1-ed8b0191ec7e\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.511070 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c157b73b-8217-4593-bfa1-ed8b0191ec7e-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"c157b73b-8217-4593-bfa1-ed8b0191ec7e\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.523301 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgvts\" (UniqueName: \"kubernetes.io/projected/c157b73b-8217-4593-bfa1-ed8b0191ec7e-kube-api-access-pgvts\") pod \"nova-kuttl-scheduler-0\" (UID: \"c157b73b-8217-4593-bfa1-ed8b0191ec7e\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.717886 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:42 crc kubenswrapper[4893]: I0128 15:25:42.904678 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02942b90-ffb9-4923-9081-dad14f2c1b5a" path="/var/lib/kubelet/pods/02942b90-ffb9-4923-9081-dad14f2c1b5a/volumes" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.258258 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:43 crc kubenswrapper[4893]: W0128 15:25:43.265096 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc157b73b_8217_4593_bfa1_ed8b0191ec7e.slice/crio-dfc5c538b748943a0ed84024509d92613cdb22e60aab23af4d0a64e686932283 WatchSource:0}: Error finding container dfc5c538b748943a0ed84024509d92613cdb22e60aab23af4d0a64e686932283: Status 404 returned error can't find the container with id dfc5c538b748943a0ed84024509d92613cdb22e60aab23af4d0a64e686932283 Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.269345 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.281429 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.313810 4893 generic.go:334] "Generic (PLEG): container finished" podID="b6a352f2-cd25-4db6-a176-3b588b69090b" containerID="dd3369ccb4f382e630cedf3ce9a3c8c0a4f25edaff1f127682ddd34489177fa6" exitCode=0 Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.314037 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"b6a352f2-cd25-4db6-a176-3b588b69090b","Type":"ContainerDied","Data":"dd3369ccb4f382e630cedf3ce9a3c8c0a4f25edaff1f127682ddd34489177fa6"} Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.314286 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"b6a352f2-cd25-4db6-a176-3b588b69090b","Type":"ContainerDied","Data":"04fdc85aef5bb6cb1e4f01900b69140cd2a8da1fe379c4852c8d2389c1e4a780"} Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.314358 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.314426 4893 scope.go:117] "RemoveContainer" containerID="dd3369ccb4f382e630cedf3ce9a3c8c0a4f25edaff1f127682ddd34489177fa6" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.324253 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"c157b73b-8217-4593-bfa1-ed8b0191ec7e","Type":"ContainerStarted","Data":"dfc5c538b748943a0ed84024509d92613cdb22e60aab23af4d0a64e686932283"} Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.326953 4893 generic.go:334] "Generic (PLEG): container finished" podID="9d90311c-4f94-4454-9b2f-65ad0ad28ec9" containerID="ad51c55f5dab353684b43f6c5fc47aae5decee54ce5a41c6a691080ae67ab8bb" exitCode=0 Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.326994 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"9d90311c-4f94-4454-9b2f-65ad0ad28ec9","Type":"ContainerDied","Data":"ad51c55f5dab353684b43f6c5fc47aae5decee54ce5a41c6a691080ae67ab8bb"} Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.327019 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"9d90311c-4f94-4454-9b2f-65ad0ad28ec9","Type":"ContainerDied","Data":"291ce4f91e3c496017d58b02389ff63930c2aa8fe14c784d614116ec8a91f147"} Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.327075 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.340438 4893 scope.go:117] "RemoveContainer" containerID="39b15ec90e0dc314d096277a31cdad18469b55883ef43179688aa28a17f3a61a" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.363404 4893 scope.go:117] "RemoveContainer" containerID="dd3369ccb4f382e630cedf3ce9a3c8c0a4f25edaff1f127682ddd34489177fa6" Jan 28 15:25:43 crc kubenswrapper[4893]: E0128 15:25:43.363838 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd3369ccb4f382e630cedf3ce9a3c8c0a4f25edaff1f127682ddd34489177fa6\": container with ID starting with dd3369ccb4f382e630cedf3ce9a3c8c0a4f25edaff1f127682ddd34489177fa6 not found: ID does not exist" containerID="dd3369ccb4f382e630cedf3ce9a3c8c0a4f25edaff1f127682ddd34489177fa6" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.363890 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd3369ccb4f382e630cedf3ce9a3c8c0a4f25edaff1f127682ddd34489177fa6"} err="failed to get container status \"dd3369ccb4f382e630cedf3ce9a3c8c0a4f25edaff1f127682ddd34489177fa6\": rpc error: code = NotFound desc = could not find container \"dd3369ccb4f382e630cedf3ce9a3c8c0a4f25edaff1f127682ddd34489177fa6\": container with ID starting with dd3369ccb4f382e630cedf3ce9a3c8c0a4f25edaff1f127682ddd34489177fa6 not found: ID does not exist" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.363942 4893 scope.go:117] "RemoveContainer" containerID="39b15ec90e0dc314d096277a31cdad18469b55883ef43179688aa28a17f3a61a" Jan 28 15:25:43 crc kubenswrapper[4893]: E0128 15:25:43.364253 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39b15ec90e0dc314d096277a31cdad18469b55883ef43179688aa28a17f3a61a\": container with ID starting with 
39b15ec90e0dc314d096277a31cdad18469b55883ef43179688aa28a17f3a61a not found: ID does not exist" containerID="39b15ec90e0dc314d096277a31cdad18469b55883ef43179688aa28a17f3a61a" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.364288 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39b15ec90e0dc314d096277a31cdad18469b55883ef43179688aa28a17f3a61a"} err="failed to get container status \"39b15ec90e0dc314d096277a31cdad18469b55883ef43179688aa28a17f3a61a\": rpc error: code = NotFound desc = could not find container \"39b15ec90e0dc314d096277a31cdad18469b55883ef43179688aa28a17f3a61a\": container with ID starting with 39b15ec90e0dc314d096277a31cdad18469b55883ef43179688aa28a17f3a61a not found: ID does not exist" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.364301 4893 scope.go:117] "RemoveContainer" containerID="ad51c55f5dab353684b43f6c5fc47aae5decee54ce5a41c6a691080ae67ab8bb" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.393259 4893 scope.go:117] "RemoveContainer" containerID="8f29d373dd3e2dc531a15d659d44a0c41142859cad1d7fefdb75e6f254d66a21" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.416550 4893 scope.go:117] "RemoveContainer" containerID="ad51c55f5dab353684b43f6c5fc47aae5decee54ce5a41c6a691080ae67ab8bb" Jan 28 15:25:43 crc kubenswrapper[4893]: E0128 15:25:43.417147 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad51c55f5dab353684b43f6c5fc47aae5decee54ce5a41c6a691080ae67ab8bb\": container with ID starting with ad51c55f5dab353684b43f6c5fc47aae5decee54ce5a41c6a691080ae67ab8bb not found: ID does not exist" containerID="ad51c55f5dab353684b43f6c5fc47aae5decee54ce5a41c6a691080ae67ab8bb" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.417205 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad51c55f5dab353684b43f6c5fc47aae5decee54ce5a41c6a691080ae67ab8bb"} err="failed to get container status \"ad51c55f5dab353684b43f6c5fc47aae5decee54ce5a41c6a691080ae67ab8bb\": rpc error: code = NotFound desc = could not find container \"ad51c55f5dab353684b43f6c5fc47aae5decee54ce5a41c6a691080ae67ab8bb\": container with ID starting with ad51c55f5dab353684b43f6c5fc47aae5decee54ce5a41c6a691080ae67ab8bb not found: ID does not exist" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.417243 4893 scope.go:117] "RemoveContainer" containerID="8f29d373dd3e2dc531a15d659d44a0c41142859cad1d7fefdb75e6f254d66a21" Jan 28 15:25:43 crc kubenswrapper[4893]: E0128 15:25:43.417575 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f29d373dd3e2dc531a15d659d44a0c41142859cad1d7fefdb75e6f254d66a21\": container with ID starting with 8f29d373dd3e2dc531a15d659d44a0c41142859cad1d7fefdb75e6f254d66a21 not found: ID does not exist" containerID="8f29d373dd3e2dc531a15d659d44a0c41142859cad1d7fefdb75e6f254d66a21" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.417607 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f29d373dd3e2dc531a15d659d44a0c41142859cad1d7fefdb75e6f254d66a21"} err="failed to get container status \"8f29d373dd3e2dc531a15d659d44a0c41142859cad1d7fefdb75e6f254d66a21\": rpc error: code = NotFound desc = could not find container \"8f29d373dd3e2dc531a15d659d44a0c41142859cad1d7fefdb75e6f254d66a21\": container with ID starting with 
8f29d373dd3e2dc531a15d659d44a0c41142859cad1d7fefdb75e6f254d66a21 not found: ID does not exist" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.424746 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d90311c-4f94-4454-9b2f-65ad0ad28ec9-config-data\") pod \"9d90311c-4f94-4454-9b2f-65ad0ad28ec9\" (UID: \"9d90311c-4f94-4454-9b2f-65ad0ad28ec9\") " Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.424799 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmsln\" (UniqueName: \"kubernetes.io/projected/9d90311c-4f94-4454-9b2f-65ad0ad28ec9-kube-api-access-lmsln\") pod \"9d90311c-4f94-4454-9b2f-65ad0ad28ec9\" (UID: \"9d90311c-4f94-4454-9b2f-65ad0ad28ec9\") " Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.424820 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6a352f2-cd25-4db6-a176-3b588b69090b-logs\") pod \"b6a352f2-cd25-4db6-a176-3b588b69090b\" (UID: \"b6a352f2-cd25-4db6-a176-3b588b69090b\") " Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.424940 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n866d\" (UniqueName: \"kubernetes.io/projected/b6a352f2-cd25-4db6-a176-3b588b69090b-kube-api-access-n866d\") pod \"b6a352f2-cd25-4db6-a176-3b588b69090b\" (UID: \"b6a352f2-cd25-4db6-a176-3b588b69090b\") " Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.425063 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d90311c-4f94-4454-9b2f-65ad0ad28ec9-logs\") pod \"9d90311c-4f94-4454-9b2f-65ad0ad28ec9\" (UID: \"9d90311c-4f94-4454-9b2f-65ad0ad28ec9\") " Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.425094 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6a352f2-cd25-4db6-a176-3b588b69090b-config-data\") pod \"b6a352f2-cd25-4db6-a176-3b588b69090b\" (UID: \"b6a352f2-cd25-4db6-a176-3b588b69090b\") " Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.425722 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d90311c-4f94-4454-9b2f-65ad0ad28ec9-logs" (OuterVolumeSpecName: "logs") pod "9d90311c-4f94-4454-9b2f-65ad0ad28ec9" (UID: "9d90311c-4f94-4454-9b2f-65ad0ad28ec9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.426131 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6a352f2-cd25-4db6-a176-3b588b69090b-logs" (OuterVolumeSpecName: "logs") pod "b6a352f2-cd25-4db6-a176-3b588b69090b" (UID: "b6a352f2-cd25-4db6-a176-3b588b69090b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.432020 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d90311c-4f94-4454-9b2f-65ad0ad28ec9-kube-api-access-lmsln" (OuterVolumeSpecName: "kube-api-access-lmsln") pod "9d90311c-4f94-4454-9b2f-65ad0ad28ec9" (UID: "9d90311c-4f94-4454-9b2f-65ad0ad28ec9"). InnerVolumeSpecName "kube-api-access-lmsln". 
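Each deleted pod's volumes walk the same reconciler stages that the surrounding entries trace, and once every volume reports detached the kubelet can remove the orphaned pod directory (the "Cleaned up orphaned pod volumes dir" lines that follow). A compact restatement of the sequence; the stage names paraphrase the log messages, and this is a sketch rather than kubelet source:

package main

import "fmt"

// The per-volume teardown pipeline as it appears in this journal.
func main() {
	stages := []string{
		`reconciler_common: "operationExecutor.UnmountVolume started"`,
		`operation_generator: "UnmountVolume.TearDown succeeded"`,
		`reconciler_common: "Volume detached ... DevicePath \"\""`,
		`kubelet_volumes: "Cleaned up orphaned pod volumes dir"`,
	}
	for i, stage := range stages {
		fmt.Printf("stage %d: %s\n", i+1, stage)
	}
}
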
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.432277 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6a352f2-cd25-4db6-a176-3b588b69090b-kube-api-access-n866d" (OuterVolumeSpecName: "kube-api-access-n866d") pod "b6a352f2-cd25-4db6-a176-3b588b69090b" (UID: "b6a352f2-cd25-4db6-a176-3b588b69090b"). InnerVolumeSpecName "kube-api-access-n866d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.451945 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6a352f2-cd25-4db6-a176-3b588b69090b-config-data" (OuterVolumeSpecName: "config-data") pod "b6a352f2-cd25-4db6-a176-3b588b69090b" (UID: "b6a352f2-cd25-4db6-a176-3b588b69090b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.453623 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d90311c-4f94-4454-9b2f-65ad0ad28ec9-config-data" (OuterVolumeSpecName: "config-data") pod "9d90311c-4f94-4454-9b2f-65ad0ad28ec9" (UID: "9d90311c-4f94-4454-9b2f-65ad0ad28ec9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.527083 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n866d\" (UniqueName: \"kubernetes.io/projected/b6a352f2-cd25-4db6-a176-3b588b69090b-kube-api-access-n866d\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.527128 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d90311c-4f94-4454-9b2f-65ad0ad28ec9-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.527142 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6a352f2-cd25-4db6-a176-3b588b69090b-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.527155 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d90311c-4f94-4454-9b2f-65ad0ad28ec9-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.527172 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmsln\" (UniqueName: \"kubernetes.io/projected/9d90311c-4f94-4454-9b2f-65ad0ad28ec9-kube-api-access-lmsln\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.527183 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6a352f2-cd25-4db6-a176-3b588b69090b-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.651835 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.667556 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.678620 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.687264 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.755631 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:25:43 crc kubenswrapper[4893]: E0128 15:25:43.756566 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d90311c-4f94-4454-9b2f-65ad0ad28ec9" containerName="nova-kuttl-metadata-metadata" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.756589 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d90311c-4f94-4454-9b2f-65ad0ad28ec9" containerName="nova-kuttl-metadata-metadata" Jan 28 15:25:43 crc kubenswrapper[4893]: E0128 15:25:43.756620 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6a352f2-cd25-4db6-a176-3b588b69090b" containerName="nova-kuttl-api-log" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.756627 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6a352f2-cd25-4db6-a176-3b588b69090b" containerName="nova-kuttl-api-log" Jan 28 15:25:43 crc kubenswrapper[4893]: E0128 15:25:43.756634 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6a352f2-cd25-4db6-a176-3b588b69090b" containerName="nova-kuttl-api-api" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.756641 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6a352f2-cd25-4db6-a176-3b588b69090b" containerName="nova-kuttl-api-api" Jan 28 15:25:43 crc kubenswrapper[4893]: E0128 15:25:43.756673 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d90311c-4f94-4454-9b2f-65ad0ad28ec9" containerName="nova-kuttl-metadata-log" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.756679 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d90311c-4f94-4454-9b2f-65ad0ad28ec9" containerName="nova-kuttl-metadata-log" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.756972 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d90311c-4f94-4454-9b2f-65ad0ad28ec9" containerName="nova-kuttl-metadata-metadata" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.757006 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6a352f2-cd25-4db6-a176-3b588b69090b" containerName="nova-kuttl-api-log" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.757033 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6a352f2-cd25-4db6-a176-3b588b69090b" containerName="nova-kuttl-api-api" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.757058 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d90311c-4f94-4454-9b2f-65ad0ad28ec9" containerName="nova-kuttl-metadata-log" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.758974 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.772793 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.807942 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.813211 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.818272 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.849329 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.858934 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.941722 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79781d2a-b011-48ea-a6ef-038161633a26-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"79781d2a-b011-48ea-a6ef-038161633a26\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.941829 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pllm8\" (UniqueName: \"kubernetes.io/projected/79781d2a-b011-48ea-a6ef-038161633a26-kube-api-access-pllm8\") pod \"nova-kuttl-metadata-0\" (UID: \"79781d2a-b011-48ea-a6ef-038161633a26\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.941917 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79781d2a-b011-48ea-a6ef-038161633a26-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"79781d2a-b011-48ea-a6ef-038161633a26\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.941972 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aef6e7fe-a1a0-4a6c-9c00-ba875605428b-config-data\") pod \"nova-kuttl-api-0\" (UID: \"aef6e7fe-a1a0-4a6c-9c00-ba875605428b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.942013 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aef6e7fe-a1a0-4a6c-9c00-ba875605428b-logs\") pod \"nova-kuttl-api-0\" (UID: \"aef6e7fe-a1a0-4a6c-9c00-ba875605428b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:43 crc kubenswrapper[4893]: I0128 15:25:43.942096 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92z74\" (UniqueName: \"kubernetes.io/projected/aef6e7fe-a1a0-4a6c-9c00-ba875605428b-kube-api-access-92z74\") pod \"nova-kuttl-api-0\" (UID: \"aef6e7fe-a1a0-4a6c-9c00-ba875605428b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:44 crc kubenswrapper[4893]: I0128 15:25:44.043501 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aef6e7fe-a1a0-4a6c-9c00-ba875605428b-config-data\") pod \"nova-kuttl-api-0\" (UID: \"aef6e7fe-a1a0-4a6c-9c00-ba875605428b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:44 crc kubenswrapper[4893]: I0128 15:25:44.043570 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aef6e7fe-a1a0-4a6c-9c00-ba875605428b-logs\") pod 
\"nova-kuttl-api-0\" (UID: \"aef6e7fe-a1a0-4a6c-9c00-ba875605428b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:44 crc kubenswrapper[4893]: I0128 15:25:44.043639 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92z74\" (UniqueName: \"kubernetes.io/projected/aef6e7fe-a1a0-4a6c-9c00-ba875605428b-kube-api-access-92z74\") pod \"nova-kuttl-api-0\" (UID: \"aef6e7fe-a1a0-4a6c-9c00-ba875605428b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:44 crc kubenswrapper[4893]: I0128 15:25:44.043835 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79781d2a-b011-48ea-a6ef-038161633a26-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"79781d2a-b011-48ea-a6ef-038161633a26\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:44 crc kubenswrapper[4893]: I0128 15:25:44.043934 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pllm8\" (UniqueName: \"kubernetes.io/projected/79781d2a-b011-48ea-a6ef-038161633a26-kube-api-access-pllm8\") pod \"nova-kuttl-metadata-0\" (UID: \"79781d2a-b011-48ea-a6ef-038161633a26\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:44 crc kubenswrapper[4893]: I0128 15:25:44.043985 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79781d2a-b011-48ea-a6ef-038161633a26-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"79781d2a-b011-48ea-a6ef-038161633a26\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:44 crc kubenswrapper[4893]: I0128 15:25:44.044907 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aef6e7fe-a1a0-4a6c-9c00-ba875605428b-logs\") pod \"nova-kuttl-api-0\" (UID: \"aef6e7fe-a1a0-4a6c-9c00-ba875605428b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:44 crc kubenswrapper[4893]: I0128 15:25:44.045227 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79781d2a-b011-48ea-a6ef-038161633a26-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"79781d2a-b011-48ea-a6ef-038161633a26\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:44 crc kubenswrapper[4893]: I0128 15:25:44.049969 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79781d2a-b011-48ea-a6ef-038161633a26-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"79781d2a-b011-48ea-a6ef-038161633a26\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:44 crc kubenswrapper[4893]: I0128 15:25:44.050952 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aef6e7fe-a1a0-4a6c-9c00-ba875605428b-config-data\") pod \"nova-kuttl-api-0\" (UID: \"aef6e7fe-a1a0-4a6c-9c00-ba875605428b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:44 crc kubenswrapper[4893]: I0128 15:25:44.063001 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92z74\" (UniqueName: \"kubernetes.io/projected/aef6e7fe-a1a0-4a6c-9c00-ba875605428b-kube-api-access-92z74\") pod \"nova-kuttl-api-0\" (UID: \"aef6e7fe-a1a0-4a6c-9c00-ba875605428b\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:44 crc kubenswrapper[4893]: I0128 15:25:44.079186 4893 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pllm8\" (UniqueName: \"kubernetes.io/projected/79781d2a-b011-48ea-a6ef-038161633a26-kube-api-access-pllm8\") pod \"nova-kuttl-metadata-0\" (UID: \"79781d2a-b011-48ea-a6ef-038161633a26\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:44 crc kubenswrapper[4893]: I0128 15:25:44.115601 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:44 crc kubenswrapper[4893]: I0128 15:25:44.158465 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:44 crc kubenswrapper[4893]: I0128 15:25:44.339442 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"c157b73b-8217-4593-bfa1-ed8b0191ec7e","Type":"ContainerStarted","Data":"e03291fba53d205068c9d3d3235f45af59ca514e95f692441d014b55a7efc62b"} Jan 28 15:25:44 crc kubenswrapper[4893]: I0128 15:25:44.358446 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.358425116 podStartE2EDuration="2.358425116s" podCreationTimestamp="2026-01-28 15:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:25:44.352683087 +0000 UTC m=+1462.126298115" watchObservedRunningTime="2026-01-28 15:25:44.358425116 +0000 UTC m=+1462.132040144" Jan 28 15:25:44 crc kubenswrapper[4893]: I0128 15:25:44.610116 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:25:44 crc kubenswrapper[4893]: I0128 15:25:44.676856 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:25:44 crc kubenswrapper[4893]: I0128 15:25:44.902313 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d90311c-4f94-4454-9b2f-65ad0ad28ec9" path="/var/lib/kubelet/pods/9d90311c-4f94-4454-9b2f-65ad0ad28ec9/volumes" Jan 28 15:25:44 crc kubenswrapper[4893]: I0128 15:25:44.903165 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6a352f2-cd25-4db6-a176-3b588b69090b" path="/var/lib/kubelet/pods/b6a352f2-cd25-4db6-a176-3b588b69090b/volumes" Jan 28 15:25:45 crc kubenswrapper[4893]: I0128 15:25:45.349523 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"aef6e7fe-a1a0-4a6c-9c00-ba875605428b","Type":"ContainerStarted","Data":"75606480fc61ecb85560c328753104bbd1c6f4d2a9da3715865a87e986b59849"} Jan 28 15:25:45 crc kubenswrapper[4893]: I0128 15:25:45.349576 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"aef6e7fe-a1a0-4a6c-9c00-ba875605428b","Type":"ContainerStarted","Data":"f3392203e41a092353a673e5cfd80d2beb0dcb2175bcfcc07fbe6a5df465b61a"} Jan 28 15:25:45 crc kubenswrapper[4893]: I0128 15:25:45.349587 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"aef6e7fe-a1a0-4a6c-9c00-ba875605428b","Type":"ContainerStarted","Data":"b573cb650459f23b55ce61ed2468f3b414b1fde0aa78aea5c83012629493ad17"} Jan 28 15:25:45 crc kubenswrapper[4893]: I0128 15:25:45.351263 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" 
event={"ID":"79781d2a-b011-48ea-a6ef-038161633a26","Type":"ContainerStarted","Data":"6e2b6ceef01087ae6c36c2862df43479e126a2e0a17c3c6e4303587f6ae89a81"} Jan 28 15:25:45 crc kubenswrapper[4893]: I0128 15:25:45.351290 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"79781d2a-b011-48ea-a6ef-038161633a26","Type":"ContainerStarted","Data":"c875e9afafaae49b47ee26abb590701a147bab3af3ac374bd6fd73365949911b"} Jan 28 15:25:45 crc kubenswrapper[4893]: I0128 15:25:45.351303 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"79781d2a-b011-48ea-a6ef-038161633a26","Type":"ContainerStarted","Data":"48bc9df7ca656e80e81d8465c5922c351f7cc3b0e8d875f9affd32ef1bdec7f2"} Jan 28 15:25:45 crc kubenswrapper[4893]: I0128 15:25:45.373288 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.37326834 podStartE2EDuration="2.37326834s" podCreationTimestamp="2026-01-28 15:25:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:25:45.365348773 +0000 UTC m=+1463.138963801" watchObservedRunningTime="2026-01-28 15:25:45.37326834 +0000 UTC m=+1463.146883368" Jan 28 15:25:45 crc kubenswrapper[4893]: I0128 15:25:45.390766 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.390739265 podStartE2EDuration="2.390739265s" podCreationTimestamp="2026-01-28 15:25:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:25:45.382863819 +0000 UTC m=+1463.156478857" watchObservedRunningTime="2026-01-28 15:25:45.390739265 +0000 UTC m=+1463.164354293" Jan 28 15:25:47 crc kubenswrapper[4893]: I0128 15:25:47.718304 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:49 crc kubenswrapper[4893]: I0128 15:25:49.159920 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:49 crc kubenswrapper[4893]: I0128 15:25:49.160018 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:52 crc kubenswrapper[4893]: I0128 15:25:52.384335 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qkdc4" podUID="a91c49bf-667f-4ec6-a996-6bad3ac2886f" containerName="registry-server" probeResult="failure" output=< Jan 28 15:25:52 crc kubenswrapper[4893]: timeout: failed to connect service ":50051" within 1s Jan 28 15:25:52 crc kubenswrapper[4893]: > Jan 28 15:25:52 crc kubenswrapper[4893]: I0128 15:25:52.718546 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:52 crc kubenswrapper[4893]: I0128 15:25:52.750245 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:53 crc kubenswrapper[4893]: I0128 15:25:53.449841 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:25:54 crc kubenswrapper[4893]: I0128 15:25:54.117933 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:54 crc kubenswrapper[4893]: I0128 15:25:54.117994 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:25:54 crc kubenswrapper[4893]: I0128 15:25:54.160309 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:54 crc kubenswrapper[4893]: I0128 15:25:54.160397 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:25:55 crc kubenswrapper[4893]: I0128 15:25:55.201775 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="aef6e7fe-a1a0-4a6c-9c00-ba875605428b" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.136:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:25:55 crc kubenswrapper[4893]: I0128 15:25:55.203681 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="aef6e7fe-a1a0-4a6c-9c00-ba875605428b" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.136:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:25:55 crc kubenswrapper[4893]: I0128 15:25:55.284656 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="79781d2a-b011-48ea-a6ef-038161633a26" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.137:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:25:55 crc kubenswrapper[4893]: I0128 15:25:55.284732 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="79781d2a-b011-48ea-a6ef-038161633a26" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.137:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:26:02 crc kubenswrapper[4893]: I0128 15:26:02.376137 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qkdc4" podUID="a91c49bf-667f-4ec6-a996-6bad3ac2886f" containerName="registry-server" probeResult="failure" output=< Jan 28 15:26:02 crc kubenswrapper[4893]: timeout: failed to connect service ":50051" within 1s Jan 28 15:26:02 crc kubenswrapper[4893]: > Jan 28 15:26:04 crc kubenswrapper[4893]: I0128 15:26:04.125652 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:26:04 crc kubenswrapper[4893]: I0128 15:26:04.126527 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:26:04 crc kubenswrapper[4893]: I0128 15:26:04.128072 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:26:04 crc kubenswrapper[4893]: I0128 15:26:04.132344 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:26:04 crc kubenswrapper[4893]: I0128 15:26:04.173092 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:26:04 crc kubenswrapper[4893]: I0128 15:26:04.173586 4893 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:26:04 crc kubenswrapper[4893]: I0128 15:26:04.175382 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:26:04 crc kubenswrapper[4893]: I0128 15:26:04.516087 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:26:04 crc kubenswrapper[4893]: I0128 15:26:04.517938 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:26:04 crc kubenswrapper[4893]: I0128 15:26:04.519868 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:26:12 crc kubenswrapper[4893]: I0128 15:26:12.377030 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qkdc4" podUID="a91c49bf-667f-4ec6-a996-6bad3ac2886f" containerName="registry-server" probeResult="failure" output=< Jan 28 15:26:12 crc kubenswrapper[4893]: timeout: failed to connect service ":50051" within 1s Jan 28 15:26:12 crc kubenswrapper[4893]: > Jan 28 15:26:22 crc kubenswrapper[4893]: I0128 15:26:22.371688 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qkdc4" podUID="a91c49bf-667f-4ec6-a996-6bad3ac2886f" containerName="registry-server" probeResult="failure" output=< Jan 28 15:26:22 crc kubenswrapper[4893]: timeout: failed to connect service ":50051" within 1s Jan 28 15:26:22 crc kubenswrapper[4893]: > Jan 28 15:26:31 crc kubenswrapper[4893]: I0128 15:26:31.380461 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qkdc4" Jan 28 15:26:31 crc kubenswrapper[4893]: I0128 15:26:31.427089 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qkdc4" Jan 28 15:26:32 crc kubenswrapper[4893]: I0128 15:26:32.239405 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qkdc4"] Jan 28 15:26:32 crc kubenswrapper[4893]: I0128 15:26:32.782070 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qkdc4" podUID="a91c49bf-667f-4ec6-a996-6bad3ac2886f" containerName="registry-server" containerID="cri-o://a045cf9543e4813fa13a36c6c7f4950e9c878ee91a268a8f865199067214dfcd" gracePeriod=2 Jan 28 15:26:33 crc kubenswrapper[4893]: I0128 15:26:33.273213 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qkdc4" Jan 28 15:26:33 crc kubenswrapper[4893]: I0128 15:26:33.394955 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7tjp\" (UniqueName: \"kubernetes.io/projected/a91c49bf-667f-4ec6-a996-6bad3ac2886f-kube-api-access-h7tjp\") pod \"a91c49bf-667f-4ec6-a996-6bad3ac2886f\" (UID: \"a91c49bf-667f-4ec6-a996-6bad3ac2886f\") " Jan 28 15:26:33 crc kubenswrapper[4893]: I0128 15:26:33.395061 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a91c49bf-667f-4ec6-a996-6bad3ac2886f-utilities\") pod \"a91c49bf-667f-4ec6-a996-6bad3ac2886f\" (UID: \"a91c49bf-667f-4ec6-a996-6bad3ac2886f\") " Jan 28 15:26:33 crc kubenswrapper[4893]: I0128 15:26:33.395109 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a91c49bf-667f-4ec6-a996-6bad3ac2886f-catalog-content\") pod \"a91c49bf-667f-4ec6-a996-6bad3ac2886f\" (UID: \"a91c49bf-667f-4ec6-a996-6bad3ac2886f\") " Jan 28 15:26:33 crc kubenswrapper[4893]: I0128 15:26:33.396006 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a91c49bf-667f-4ec6-a996-6bad3ac2886f-utilities" (OuterVolumeSpecName: "utilities") pod "a91c49bf-667f-4ec6-a996-6bad3ac2886f" (UID: "a91c49bf-667f-4ec6-a996-6bad3ac2886f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:26:33 crc kubenswrapper[4893]: I0128 15:26:33.405970 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a91c49bf-667f-4ec6-a996-6bad3ac2886f-kube-api-access-h7tjp" (OuterVolumeSpecName: "kube-api-access-h7tjp") pod "a91c49bf-667f-4ec6-a996-6bad3ac2886f" (UID: "a91c49bf-667f-4ec6-a996-6bad3ac2886f"). InnerVolumeSpecName "kube-api-access-h7tjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:26:33 crc kubenswrapper[4893]: I0128 15:26:33.496671 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7tjp\" (UniqueName: \"kubernetes.io/projected/a91c49bf-667f-4ec6-a996-6bad3ac2886f-kube-api-access-h7tjp\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:33 crc kubenswrapper[4893]: I0128 15:26:33.496706 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a91c49bf-667f-4ec6-a996-6bad3ac2886f-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:33 crc kubenswrapper[4893]: I0128 15:26:33.505605 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a91c49bf-667f-4ec6-a996-6bad3ac2886f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a91c49bf-667f-4ec6-a996-6bad3ac2886f" (UID: "a91c49bf-667f-4ec6-a996-6bad3ac2886f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:26:33 crc kubenswrapper[4893]: I0128 15:26:33.598135 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a91c49bf-667f-4ec6-a996-6bad3ac2886f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:33 crc kubenswrapper[4893]: I0128 15:26:33.791119 4893 generic.go:334] "Generic (PLEG): container finished" podID="a91c49bf-667f-4ec6-a996-6bad3ac2886f" containerID="a045cf9543e4813fa13a36c6c7f4950e9c878ee91a268a8f865199067214dfcd" exitCode=0 Jan 28 15:26:33 crc kubenswrapper[4893]: I0128 15:26:33.791308 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkdc4" event={"ID":"a91c49bf-667f-4ec6-a996-6bad3ac2886f","Type":"ContainerDied","Data":"a045cf9543e4813fa13a36c6c7f4950e9c878ee91a268a8f865199067214dfcd"} Jan 28 15:26:33 crc kubenswrapper[4893]: I0128 15:26:33.791914 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkdc4" event={"ID":"a91c49bf-667f-4ec6-a996-6bad3ac2886f","Type":"ContainerDied","Data":"e6be83fc8960f6668990eecf9b8be0a6c1cc8ec0af5019e9dd4efbd1dffdeef9"} Jan 28 15:26:33 crc kubenswrapper[4893]: I0128 15:26:33.791994 4893 scope.go:117] "RemoveContainer" containerID="a045cf9543e4813fa13a36c6c7f4950e9c878ee91a268a8f865199067214dfcd" Jan 28 15:26:33 crc kubenswrapper[4893]: I0128 15:26:33.791364 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qkdc4" Jan 28 15:26:33 crc kubenswrapper[4893]: I0128 15:26:33.830585 4893 scope.go:117] "RemoveContainer" containerID="f374a8a412200772a5cddfca6cac9003ed999280b4366673a65748294c1a4cae" Jan 28 15:26:33 crc kubenswrapper[4893]: I0128 15:26:33.837456 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qkdc4"] Jan 28 15:26:33 crc kubenswrapper[4893]: I0128 15:26:33.847461 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qkdc4"] Jan 28 15:26:33 crc kubenswrapper[4893]: I0128 15:26:33.851973 4893 scope.go:117] "RemoveContainer" containerID="5268147d6f66717a7acb2ff88880a9b8662d31b58afdfb09dc74ab1c23bc68b3" Jan 28 15:26:33 crc kubenswrapper[4893]: I0128 15:26:33.883165 4893 scope.go:117] "RemoveContainer" containerID="a045cf9543e4813fa13a36c6c7f4950e9c878ee91a268a8f865199067214dfcd" Jan 28 15:26:33 crc kubenswrapper[4893]: E0128 15:26:33.884669 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a045cf9543e4813fa13a36c6c7f4950e9c878ee91a268a8f865199067214dfcd\": container with ID starting with a045cf9543e4813fa13a36c6c7f4950e9c878ee91a268a8f865199067214dfcd not found: ID does not exist" containerID="a045cf9543e4813fa13a36c6c7f4950e9c878ee91a268a8f865199067214dfcd" Jan 28 15:26:33 crc kubenswrapper[4893]: I0128 15:26:33.884756 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a045cf9543e4813fa13a36c6c7f4950e9c878ee91a268a8f865199067214dfcd"} err="failed to get container status \"a045cf9543e4813fa13a36c6c7f4950e9c878ee91a268a8f865199067214dfcd\": rpc error: code = NotFound desc = could not find container \"a045cf9543e4813fa13a36c6c7f4950e9c878ee91a268a8f865199067214dfcd\": container with ID starting with a045cf9543e4813fa13a36c6c7f4950e9c878ee91a268a8f865199067214dfcd not found: ID does not exist" Jan 28 15:26:33 crc 
kubenswrapper[4893]: I0128 15:26:33.884793 4893 scope.go:117] "RemoveContainer" containerID="f374a8a412200772a5cddfca6cac9003ed999280b4366673a65748294c1a4cae" Jan 28 15:26:33 crc kubenswrapper[4893]: E0128 15:26:33.885561 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f374a8a412200772a5cddfca6cac9003ed999280b4366673a65748294c1a4cae\": container with ID starting with f374a8a412200772a5cddfca6cac9003ed999280b4366673a65748294c1a4cae not found: ID does not exist" containerID="f374a8a412200772a5cddfca6cac9003ed999280b4366673a65748294c1a4cae" Jan 28 15:26:33 crc kubenswrapper[4893]: I0128 15:26:33.885583 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f374a8a412200772a5cddfca6cac9003ed999280b4366673a65748294c1a4cae"} err="failed to get container status \"f374a8a412200772a5cddfca6cac9003ed999280b4366673a65748294c1a4cae\": rpc error: code = NotFound desc = could not find container \"f374a8a412200772a5cddfca6cac9003ed999280b4366673a65748294c1a4cae\": container with ID starting with f374a8a412200772a5cddfca6cac9003ed999280b4366673a65748294c1a4cae not found: ID does not exist" Jan 28 15:26:33 crc kubenswrapper[4893]: I0128 15:26:33.885603 4893 scope.go:117] "RemoveContainer" containerID="5268147d6f66717a7acb2ff88880a9b8662d31b58afdfb09dc74ab1c23bc68b3" Jan 28 15:26:33 crc kubenswrapper[4893]: E0128 15:26:33.887410 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5268147d6f66717a7acb2ff88880a9b8662d31b58afdfb09dc74ab1c23bc68b3\": container with ID starting with 5268147d6f66717a7acb2ff88880a9b8662d31b58afdfb09dc74ab1c23bc68b3 not found: ID does not exist" containerID="5268147d6f66717a7acb2ff88880a9b8662d31b58afdfb09dc74ab1c23bc68b3" Jan 28 15:26:33 crc kubenswrapper[4893]: I0128 15:26:33.887446 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5268147d6f66717a7acb2ff88880a9b8662d31b58afdfb09dc74ab1c23bc68b3"} err="failed to get container status \"5268147d6f66717a7acb2ff88880a9b8662d31b58afdfb09dc74ab1c23bc68b3\": rpc error: code = NotFound desc = could not find container \"5268147d6f66717a7acb2ff88880a9b8662d31b58afdfb09dc74ab1c23bc68b3\": container with ID starting with 5268147d6f66717a7acb2ff88880a9b8662d31b58afdfb09dc74ab1c23bc68b3 not found: ID does not exist" Jan 28 15:26:34 crc kubenswrapper[4893]: I0128 15:26:34.902877 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a91c49bf-667f-4ec6-a996-6bad3ac2886f" path="/var/lib/kubelet/pods/a91c49bf-667f-4ec6-a996-6bad3ac2886f/volumes" Jan 28 15:27:29 crc kubenswrapper[4893]: I0128 15:27:29.831521 4893 scope.go:117] "RemoveContainer" containerID="d02d8e76e47cb85e8c3b2d52aa5014e5f34cf38741e43f49171677933749258e" Jan 28 15:27:58 crc kubenswrapper[4893]: I0128 15:27:58.276710 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-76m6f"] Jan 28 15:27:58 crc kubenswrapper[4893]: E0128 15:27:58.277672 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a91c49bf-667f-4ec6-a996-6bad3ac2886f" containerName="registry-server" Jan 28 15:27:58 crc kubenswrapper[4893]: I0128 15:27:58.277688 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="a91c49bf-667f-4ec6-a996-6bad3ac2886f" containerName="registry-server" Jan 28 15:27:58 crc kubenswrapper[4893]: E0128 15:27:58.277706 4893 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="a91c49bf-667f-4ec6-a996-6bad3ac2886f" containerName="extract-content" Jan 28 15:27:58 crc kubenswrapper[4893]: I0128 15:27:58.277714 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="a91c49bf-667f-4ec6-a996-6bad3ac2886f" containerName="extract-content" Jan 28 15:27:58 crc kubenswrapper[4893]: E0128 15:27:58.277727 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a91c49bf-667f-4ec6-a996-6bad3ac2886f" containerName="extract-utilities" Jan 28 15:27:58 crc kubenswrapper[4893]: I0128 15:27:58.277735 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="a91c49bf-667f-4ec6-a996-6bad3ac2886f" containerName="extract-utilities" Jan 28 15:27:58 crc kubenswrapper[4893]: I0128 15:27:58.277932 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="a91c49bf-667f-4ec6-a996-6bad3ac2886f" containerName="registry-server" Jan 28 15:27:58 crc kubenswrapper[4893]: I0128 15:27:58.279350 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-76m6f" Jan 28 15:27:58 crc kubenswrapper[4893]: I0128 15:27:58.289375 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-76m6f"] Jan 28 15:27:58 crc kubenswrapper[4893]: I0128 15:27:58.416464 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d39ef470-13e4-4ee1-b981-51c686424ded-utilities\") pod \"redhat-marketplace-76m6f\" (UID: \"d39ef470-13e4-4ee1-b981-51c686424ded\") " pod="openshift-marketplace/redhat-marketplace-76m6f" Jan 28 15:27:58 crc kubenswrapper[4893]: I0128 15:27:58.416746 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7xmc\" (UniqueName: \"kubernetes.io/projected/d39ef470-13e4-4ee1-b981-51c686424ded-kube-api-access-c7xmc\") pod \"redhat-marketplace-76m6f\" (UID: \"d39ef470-13e4-4ee1-b981-51c686424ded\") " pod="openshift-marketplace/redhat-marketplace-76m6f" Jan 28 15:27:58 crc kubenswrapper[4893]: I0128 15:27:58.416975 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d39ef470-13e4-4ee1-b981-51c686424ded-catalog-content\") pod \"redhat-marketplace-76m6f\" (UID: \"d39ef470-13e4-4ee1-b981-51c686424ded\") " pod="openshift-marketplace/redhat-marketplace-76m6f" Jan 28 15:27:58 crc kubenswrapper[4893]: I0128 15:27:58.519126 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d39ef470-13e4-4ee1-b981-51c686424ded-utilities\") pod \"redhat-marketplace-76m6f\" (UID: \"d39ef470-13e4-4ee1-b981-51c686424ded\") " pod="openshift-marketplace/redhat-marketplace-76m6f" Jan 28 15:27:58 crc kubenswrapper[4893]: I0128 15:27:58.519224 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7xmc\" (UniqueName: \"kubernetes.io/projected/d39ef470-13e4-4ee1-b981-51c686424ded-kube-api-access-c7xmc\") pod \"redhat-marketplace-76m6f\" (UID: \"d39ef470-13e4-4ee1-b981-51c686424ded\") " pod="openshift-marketplace/redhat-marketplace-76m6f" Jan 28 15:27:58 crc kubenswrapper[4893]: I0128 15:27:58.519319 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d39ef470-13e4-4ee1-b981-51c686424ded-catalog-content\") pod \"redhat-marketplace-76m6f\" (UID: \"d39ef470-13e4-4ee1-b981-51c686424ded\") " pod="openshift-marketplace/redhat-marketplace-76m6f" Jan 28 15:27:58 crc kubenswrapper[4893]: I0128 15:27:58.519812 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d39ef470-13e4-4ee1-b981-51c686424ded-catalog-content\") pod \"redhat-marketplace-76m6f\" (UID: \"d39ef470-13e4-4ee1-b981-51c686424ded\") " pod="openshift-marketplace/redhat-marketplace-76m6f" Jan 28 15:27:58 crc kubenswrapper[4893]: I0128 15:27:58.519812 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d39ef470-13e4-4ee1-b981-51c686424ded-utilities\") pod \"redhat-marketplace-76m6f\" (UID: \"d39ef470-13e4-4ee1-b981-51c686424ded\") " pod="openshift-marketplace/redhat-marketplace-76m6f" Jan 28 15:27:58 crc kubenswrapper[4893]: I0128 15:27:58.541981 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7xmc\" (UniqueName: \"kubernetes.io/projected/d39ef470-13e4-4ee1-b981-51c686424ded-kube-api-access-c7xmc\") pod \"redhat-marketplace-76m6f\" (UID: \"d39ef470-13e4-4ee1-b981-51c686424ded\") " pod="openshift-marketplace/redhat-marketplace-76m6f" Jan 28 15:27:58 crc kubenswrapper[4893]: I0128 15:27:58.601031 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-76m6f" Jan 28 15:27:59 crc kubenswrapper[4893]: I0128 15:27:59.093580 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-76m6f"] Jan 28 15:27:59 crc kubenswrapper[4893]: I0128 15:27:59.482650 4893 generic.go:334] "Generic (PLEG): container finished" podID="d39ef470-13e4-4ee1-b981-51c686424ded" containerID="d267ce9151606f29865c8c77a16d0b6d21b5f07652ae2c65ca5a840cc3f3b831" exitCode=0 Jan 28 15:27:59 crc kubenswrapper[4893]: I0128 15:27:59.482693 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76m6f" event={"ID":"d39ef470-13e4-4ee1-b981-51c686424ded","Type":"ContainerDied","Data":"d267ce9151606f29865c8c77a16d0b6d21b5f07652ae2c65ca5a840cc3f3b831"} Jan 28 15:27:59 crc kubenswrapper[4893]: I0128 15:27:59.482720 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76m6f" event={"ID":"d39ef470-13e4-4ee1-b981-51c686424ded","Type":"ContainerStarted","Data":"07a4f2ab1ea3cfaef92659b974942c02054988f782a7d3616fe9afaef942b7fe"} Jan 28 15:27:59 crc kubenswrapper[4893]: I0128 15:27:59.484182 4893 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:28:01 crc kubenswrapper[4893]: I0128 15:28:01.498694 4893 generic.go:334] "Generic (PLEG): container finished" podID="d39ef470-13e4-4ee1-b981-51c686424ded" containerID="a48659b885016b106aa9cf9578cc5aca8db2c0e4e666b41a5b9df96ec438df60" exitCode=0 Jan 28 15:28:01 crc kubenswrapper[4893]: I0128 15:28:01.498896 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76m6f" event={"ID":"d39ef470-13e4-4ee1-b981-51c686424ded","Type":"ContainerDied","Data":"a48659b885016b106aa9cf9578cc5aca8db2c0e4e666b41a5b9df96ec438df60"} Jan 28 15:28:02 crc kubenswrapper[4893]: I0128 15:28:02.510959 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76m6f" 
event={"ID":"d39ef470-13e4-4ee1-b981-51c686424ded","Type":"ContainerStarted","Data":"6fa82fae1b935e4942e0e50f5ea28d73536452dfd8caf2a5fb21ed38bd6d2b46"} Jan 28 15:28:02 crc kubenswrapper[4893]: I0128 15:28:02.537394 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-76m6f" podStartSLOduration=1.886797117 podStartE2EDuration="4.537371263s" podCreationTimestamp="2026-01-28 15:27:58 +0000 UTC" firstStartedPulling="2026-01-28 15:27:59.483986441 +0000 UTC m=+1597.257601469" lastFinishedPulling="2026-01-28 15:28:02.134560587 +0000 UTC m=+1599.908175615" observedRunningTime="2026-01-28 15:28:02.530309031 +0000 UTC m=+1600.303924059" watchObservedRunningTime="2026-01-28 15:28:02.537371263 +0000 UTC m=+1600.310986301" Jan 28 15:28:05 crc kubenswrapper[4893]: I0128 15:28:05.722165 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:28:05 crc kubenswrapper[4893]: I0128 15:28:05.723362 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:28:08 crc kubenswrapper[4893]: I0128 15:28:08.601262 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-76m6f" Jan 28 15:28:08 crc kubenswrapper[4893]: I0128 15:28:08.601749 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-76m6f" Jan 28 15:28:08 crc kubenswrapper[4893]: I0128 15:28:08.652379 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-76m6f" Jan 28 15:28:09 crc kubenswrapper[4893]: I0128 15:28:09.608322 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-76m6f" Jan 28 15:28:09 crc kubenswrapper[4893]: I0128 15:28:09.662206 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-76m6f"] Jan 28 15:28:11 crc kubenswrapper[4893]: I0128 15:28:11.576279 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-76m6f" podUID="d39ef470-13e4-4ee1-b981-51c686424ded" containerName="registry-server" containerID="cri-o://6fa82fae1b935e4942e0e50f5ea28d73536452dfd8caf2a5fb21ed38bd6d2b46" gracePeriod=2 Jan 28 15:28:12 crc kubenswrapper[4893]: I0128 15:28:12.114153 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-76m6f" Jan 28 15:28:12 crc kubenswrapper[4893]: I0128 15:28:12.164509 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7xmc\" (UniqueName: \"kubernetes.io/projected/d39ef470-13e4-4ee1-b981-51c686424ded-kube-api-access-c7xmc\") pod \"d39ef470-13e4-4ee1-b981-51c686424ded\" (UID: \"d39ef470-13e4-4ee1-b981-51c686424ded\") " Jan 28 15:28:12 crc kubenswrapper[4893]: I0128 15:28:12.164664 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d39ef470-13e4-4ee1-b981-51c686424ded-utilities\") pod \"d39ef470-13e4-4ee1-b981-51c686424ded\" (UID: \"d39ef470-13e4-4ee1-b981-51c686424ded\") " Jan 28 15:28:12 crc kubenswrapper[4893]: I0128 15:28:12.164762 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d39ef470-13e4-4ee1-b981-51c686424ded-catalog-content\") pod \"d39ef470-13e4-4ee1-b981-51c686424ded\" (UID: \"d39ef470-13e4-4ee1-b981-51c686424ded\") " Jan 28 15:28:12 crc kubenswrapper[4893]: I0128 15:28:12.166094 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d39ef470-13e4-4ee1-b981-51c686424ded-utilities" (OuterVolumeSpecName: "utilities") pod "d39ef470-13e4-4ee1-b981-51c686424ded" (UID: "d39ef470-13e4-4ee1-b981-51c686424ded"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:28:12 crc kubenswrapper[4893]: I0128 15:28:12.167585 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d39ef470-13e4-4ee1-b981-51c686424ded-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:28:12 crc kubenswrapper[4893]: I0128 15:28:12.173393 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d39ef470-13e4-4ee1-b981-51c686424ded-kube-api-access-c7xmc" (OuterVolumeSpecName: "kube-api-access-c7xmc") pod "d39ef470-13e4-4ee1-b981-51c686424ded" (UID: "d39ef470-13e4-4ee1-b981-51c686424ded"). InnerVolumeSpecName "kube-api-access-c7xmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:28:12 crc kubenswrapper[4893]: I0128 15:28:12.188526 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d39ef470-13e4-4ee1-b981-51c686424ded-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d39ef470-13e4-4ee1-b981-51c686424ded" (UID: "d39ef470-13e4-4ee1-b981-51c686424ded"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:28:12 crc kubenswrapper[4893]: I0128 15:28:12.269045 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7xmc\" (UniqueName: \"kubernetes.io/projected/d39ef470-13e4-4ee1-b981-51c686424ded-kube-api-access-c7xmc\") on node \"crc\" DevicePath \"\"" Jan 28 15:28:12 crc kubenswrapper[4893]: I0128 15:28:12.269084 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d39ef470-13e4-4ee1-b981-51c686424ded-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:28:12 crc kubenswrapper[4893]: I0128 15:28:12.585191 4893 generic.go:334] "Generic (PLEG): container finished" podID="d39ef470-13e4-4ee1-b981-51c686424ded" containerID="6fa82fae1b935e4942e0e50f5ea28d73536452dfd8caf2a5fb21ed38bd6d2b46" exitCode=0 Jan 28 15:28:12 crc kubenswrapper[4893]: I0128 15:28:12.585241 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76m6f" event={"ID":"d39ef470-13e4-4ee1-b981-51c686424ded","Type":"ContainerDied","Data":"6fa82fae1b935e4942e0e50f5ea28d73536452dfd8caf2a5fb21ed38bd6d2b46"} Jan 28 15:28:12 crc kubenswrapper[4893]: I0128 15:28:12.585271 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76m6f" event={"ID":"d39ef470-13e4-4ee1-b981-51c686424ded","Type":"ContainerDied","Data":"07a4f2ab1ea3cfaef92659b974942c02054988f782a7d3616fe9afaef942b7fe"} Jan 28 15:28:12 crc kubenswrapper[4893]: I0128 15:28:12.585290 4893 scope.go:117] "RemoveContainer" containerID="6fa82fae1b935e4942e0e50f5ea28d73536452dfd8caf2a5fb21ed38bd6d2b46" Jan 28 15:28:12 crc kubenswrapper[4893]: I0128 15:28:12.585413 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-76m6f" Jan 28 15:28:12 crc kubenswrapper[4893]: I0128 15:28:12.622806 4893 scope.go:117] "RemoveContainer" containerID="a48659b885016b106aa9cf9578cc5aca8db2c0e4e666b41a5b9df96ec438df60" Jan 28 15:28:12 crc kubenswrapper[4893]: I0128 15:28:12.625020 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-76m6f"] Jan 28 15:28:12 crc kubenswrapper[4893]: I0128 15:28:12.628191 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-76m6f"] Jan 28 15:28:12 crc kubenswrapper[4893]: I0128 15:28:12.646128 4893 scope.go:117] "RemoveContainer" containerID="d267ce9151606f29865c8c77a16d0b6d21b5f07652ae2c65ca5a840cc3f3b831" Jan 28 15:28:12 crc kubenswrapper[4893]: I0128 15:28:12.678873 4893 scope.go:117] "RemoveContainer" containerID="6fa82fae1b935e4942e0e50f5ea28d73536452dfd8caf2a5fb21ed38bd6d2b46" Jan 28 15:28:12 crc kubenswrapper[4893]: E0128 15:28:12.679302 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fa82fae1b935e4942e0e50f5ea28d73536452dfd8caf2a5fb21ed38bd6d2b46\": container with ID starting with 6fa82fae1b935e4942e0e50f5ea28d73536452dfd8caf2a5fb21ed38bd6d2b46 not found: ID does not exist" containerID="6fa82fae1b935e4942e0e50f5ea28d73536452dfd8caf2a5fb21ed38bd6d2b46" Jan 28 15:28:12 crc kubenswrapper[4893]: I0128 15:28:12.679343 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fa82fae1b935e4942e0e50f5ea28d73536452dfd8caf2a5fb21ed38bd6d2b46"} err="failed to get container status \"6fa82fae1b935e4942e0e50f5ea28d73536452dfd8caf2a5fb21ed38bd6d2b46\": rpc error: code = NotFound desc = could not find container \"6fa82fae1b935e4942e0e50f5ea28d73536452dfd8caf2a5fb21ed38bd6d2b46\": container with ID starting with 6fa82fae1b935e4942e0e50f5ea28d73536452dfd8caf2a5fb21ed38bd6d2b46 not found: ID does not exist" Jan 28 15:28:12 crc kubenswrapper[4893]: I0128 15:28:12.679368 4893 scope.go:117] "RemoveContainer" containerID="a48659b885016b106aa9cf9578cc5aca8db2c0e4e666b41a5b9df96ec438df60" Jan 28 15:28:12 crc kubenswrapper[4893]: E0128 15:28:12.680001 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a48659b885016b106aa9cf9578cc5aca8db2c0e4e666b41a5b9df96ec438df60\": container with ID starting with a48659b885016b106aa9cf9578cc5aca8db2c0e4e666b41a5b9df96ec438df60 not found: ID does not exist" containerID="a48659b885016b106aa9cf9578cc5aca8db2c0e4e666b41a5b9df96ec438df60" Jan 28 15:28:12 crc kubenswrapper[4893]: I0128 15:28:12.680051 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a48659b885016b106aa9cf9578cc5aca8db2c0e4e666b41a5b9df96ec438df60"} err="failed to get container status \"a48659b885016b106aa9cf9578cc5aca8db2c0e4e666b41a5b9df96ec438df60\": rpc error: code = NotFound desc = could not find container \"a48659b885016b106aa9cf9578cc5aca8db2c0e4e666b41a5b9df96ec438df60\": container with ID starting with a48659b885016b106aa9cf9578cc5aca8db2c0e4e666b41a5b9df96ec438df60 not found: ID does not exist" Jan 28 15:28:12 crc kubenswrapper[4893]: I0128 15:28:12.680085 4893 scope.go:117] "RemoveContainer" containerID="d267ce9151606f29865c8c77a16d0b6d21b5f07652ae2c65ca5a840cc3f3b831" Jan 28 15:28:12 crc kubenswrapper[4893]: E0128 15:28:12.680380 4893 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d267ce9151606f29865c8c77a16d0b6d21b5f07652ae2c65ca5a840cc3f3b831\": container with ID starting with d267ce9151606f29865c8c77a16d0b6d21b5f07652ae2c65ca5a840cc3f3b831 not found: ID does not exist" containerID="d267ce9151606f29865c8c77a16d0b6d21b5f07652ae2c65ca5a840cc3f3b831" Jan 28 15:28:12 crc kubenswrapper[4893]: I0128 15:28:12.680410 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d267ce9151606f29865c8c77a16d0b6d21b5f07652ae2c65ca5a840cc3f3b831"} err="failed to get container status \"d267ce9151606f29865c8c77a16d0b6d21b5f07652ae2c65ca5a840cc3f3b831\": rpc error: code = NotFound desc = could not find container \"d267ce9151606f29865c8c77a16d0b6d21b5f07652ae2c65ca5a840cc3f3b831\": container with ID starting with d267ce9151606f29865c8c77a16d0b6d21b5f07652ae2c65ca5a840cc3f3b831 not found: ID does not exist" Jan 28 15:28:12 crc kubenswrapper[4893]: I0128 15:28:12.901397 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d39ef470-13e4-4ee1-b981-51c686424ded" path="/var/lib/kubelet/pods/d39ef470-13e4-4ee1-b981-51c686424ded/volumes" Jan 28 15:28:29 crc kubenswrapper[4893]: I0128 15:28:29.911097 4893 scope.go:117] "RemoveContainer" containerID="b1e08cc8aaeb9cef6f269a8af9986252ca87e9c362be8ca3dad63bb61ca2a7a4" Jan 28 15:28:34 crc kubenswrapper[4893]: I0128 15:28:34.103752 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rk7kq"] Jan 28 15:28:34 crc kubenswrapper[4893]: E0128 15:28:34.104635 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d39ef470-13e4-4ee1-b981-51c686424ded" containerName="extract-utilities" Jan 28 15:28:34 crc kubenswrapper[4893]: I0128 15:28:34.104651 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="d39ef470-13e4-4ee1-b981-51c686424ded" containerName="extract-utilities" Jan 28 15:28:34 crc kubenswrapper[4893]: E0128 15:28:34.104677 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d39ef470-13e4-4ee1-b981-51c686424ded" containerName="registry-server" Jan 28 15:28:34 crc kubenswrapper[4893]: I0128 15:28:34.104686 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="d39ef470-13e4-4ee1-b981-51c686424ded" containerName="registry-server" Jan 28 15:28:34 crc kubenswrapper[4893]: E0128 15:28:34.104711 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d39ef470-13e4-4ee1-b981-51c686424ded" containerName="extract-content" Jan 28 15:28:34 crc kubenswrapper[4893]: I0128 15:28:34.104719 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="d39ef470-13e4-4ee1-b981-51c686424ded" containerName="extract-content" Jan 28 15:28:34 crc kubenswrapper[4893]: I0128 15:28:34.104977 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="d39ef470-13e4-4ee1-b981-51c686424ded" containerName="registry-server" Jan 28 15:28:34 crc kubenswrapper[4893]: I0128 15:28:34.106500 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rk7kq" Jan 28 15:28:34 crc kubenswrapper[4893]: I0128 15:28:34.116250 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rk7kq"] Jan 28 15:28:34 crc kubenswrapper[4893]: I0128 15:28:34.200433 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c592a8d-fc28-4808-9752-99c79e40aabd-utilities\") pod \"community-operators-rk7kq\" (UID: \"7c592a8d-fc28-4808-9752-99c79e40aabd\") " pod="openshift-marketplace/community-operators-rk7kq" Jan 28 15:28:34 crc kubenswrapper[4893]: I0128 15:28:34.200534 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c592a8d-fc28-4808-9752-99c79e40aabd-catalog-content\") pod \"community-operators-rk7kq\" (UID: \"7c592a8d-fc28-4808-9752-99c79e40aabd\") " pod="openshift-marketplace/community-operators-rk7kq" Jan 28 15:28:34 crc kubenswrapper[4893]: I0128 15:28:34.200659 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjqsx\" (UniqueName: \"kubernetes.io/projected/7c592a8d-fc28-4808-9752-99c79e40aabd-kube-api-access-mjqsx\") pod \"community-operators-rk7kq\" (UID: \"7c592a8d-fc28-4808-9752-99c79e40aabd\") " pod="openshift-marketplace/community-operators-rk7kq" Jan 28 15:28:34 crc kubenswrapper[4893]: I0128 15:28:34.301960 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjqsx\" (UniqueName: \"kubernetes.io/projected/7c592a8d-fc28-4808-9752-99c79e40aabd-kube-api-access-mjqsx\") pod \"community-operators-rk7kq\" (UID: \"7c592a8d-fc28-4808-9752-99c79e40aabd\") " pod="openshift-marketplace/community-operators-rk7kq" Jan 28 15:28:34 crc kubenswrapper[4893]: I0128 15:28:34.302109 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c592a8d-fc28-4808-9752-99c79e40aabd-utilities\") pod \"community-operators-rk7kq\" (UID: \"7c592a8d-fc28-4808-9752-99c79e40aabd\") " pod="openshift-marketplace/community-operators-rk7kq" Jan 28 15:28:34 crc kubenswrapper[4893]: I0128 15:28:34.302162 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c592a8d-fc28-4808-9752-99c79e40aabd-catalog-content\") pod \"community-operators-rk7kq\" (UID: \"7c592a8d-fc28-4808-9752-99c79e40aabd\") " pod="openshift-marketplace/community-operators-rk7kq" Jan 28 15:28:34 crc kubenswrapper[4893]: I0128 15:28:34.302696 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c592a8d-fc28-4808-9752-99c79e40aabd-catalog-content\") pod \"community-operators-rk7kq\" (UID: \"7c592a8d-fc28-4808-9752-99c79e40aabd\") " pod="openshift-marketplace/community-operators-rk7kq" Jan 28 15:28:34 crc kubenswrapper[4893]: I0128 15:28:34.303361 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c592a8d-fc28-4808-9752-99c79e40aabd-utilities\") pod \"community-operators-rk7kq\" (UID: \"7c592a8d-fc28-4808-9752-99c79e40aabd\") " pod="openshift-marketplace/community-operators-rk7kq" Jan 28 15:28:34 crc kubenswrapper[4893]: I0128 15:28:34.324848 4893 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mjqsx\" (UniqueName: \"kubernetes.io/projected/7c592a8d-fc28-4808-9752-99c79e40aabd-kube-api-access-mjqsx\") pod \"community-operators-rk7kq\" (UID: \"7c592a8d-fc28-4808-9752-99c79e40aabd\") " pod="openshift-marketplace/community-operators-rk7kq" Jan 28 15:28:34 crc kubenswrapper[4893]: I0128 15:28:34.427060 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rk7kq" Jan 28 15:28:34 crc kubenswrapper[4893]: I0128 15:28:34.873071 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rk7kq"] Jan 28 15:28:35 crc kubenswrapper[4893]: I0128 15:28:35.722522 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:28:35 crc kubenswrapper[4893]: I0128 15:28:35.722882 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:28:35 crc kubenswrapper[4893]: I0128 15:28:35.777018 4893 generic.go:334] "Generic (PLEG): container finished" podID="7c592a8d-fc28-4808-9752-99c79e40aabd" containerID="e854c998cb03bd040fa228dfba07cf69216dad9b00d8b86732a61b8c402642f5" exitCode=0 Jan 28 15:28:35 crc kubenswrapper[4893]: I0128 15:28:35.777070 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rk7kq" event={"ID":"7c592a8d-fc28-4808-9752-99c79e40aabd","Type":"ContainerDied","Data":"e854c998cb03bd040fa228dfba07cf69216dad9b00d8b86732a61b8c402642f5"} Jan 28 15:28:35 crc kubenswrapper[4893]: I0128 15:28:35.777101 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rk7kq" event={"ID":"7c592a8d-fc28-4808-9752-99c79e40aabd","Type":"ContainerStarted","Data":"d39d8bf2c30c8ad1ed7f7b48db734382f26e070f002a3c41169837960a5023a6"} Jan 28 15:28:36 crc kubenswrapper[4893]: I0128 15:28:36.786606 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rk7kq" event={"ID":"7c592a8d-fc28-4808-9752-99c79e40aabd","Type":"ContainerStarted","Data":"291bfc16f75d818362a9619ef16ea7fae67643665629678c520dd190bea8d74d"} Jan 28 15:28:37 crc kubenswrapper[4893]: I0128 15:28:37.803985 4893 generic.go:334] "Generic (PLEG): container finished" podID="7c592a8d-fc28-4808-9752-99c79e40aabd" containerID="291bfc16f75d818362a9619ef16ea7fae67643665629678c520dd190bea8d74d" exitCode=0 Jan 28 15:28:37 crc kubenswrapper[4893]: I0128 15:28:37.804370 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rk7kq" event={"ID":"7c592a8d-fc28-4808-9752-99c79e40aabd","Type":"ContainerDied","Data":"291bfc16f75d818362a9619ef16ea7fae67643665629678c520dd190bea8d74d"} Jan 28 15:28:38 crc kubenswrapper[4893]: I0128 15:28:38.814396 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rk7kq" event={"ID":"7c592a8d-fc28-4808-9752-99c79e40aabd","Type":"ContainerStarted","Data":"9f2bfa9ba1b93562b7470f4e5573a14a55292beef626fcd82b7ea8f353101a7e"} Jan 28 
15:28:38 crc kubenswrapper[4893]: I0128 15:28:38.839596 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rk7kq" podStartSLOduration=2.2575185859999998 podStartE2EDuration="4.839576184s" podCreationTimestamp="2026-01-28 15:28:34 +0000 UTC" firstStartedPulling="2026-01-28 15:28:35.778545524 +0000 UTC m=+1633.552160552" lastFinishedPulling="2026-01-28 15:28:38.360603122 +0000 UTC m=+1636.134218150" observedRunningTime="2026-01-28 15:28:38.831054842 +0000 UTC m=+1636.604669870" watchObservedRunningTime="2026-01-28 15:28:38.839576184 +0000 UTC m=+1636.613191212" Jan 28 15:28:44 crc kubenswrapper[4893]: I0128 15:28:44.427829 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rk7kq" Jan 28 15:28:44 crc kubenswrapper[4893]: I0128 15:28:44.428385 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rk7kq" Jan 28 15:28:44 crc kubenswrapper[4893]: I0128 15:28:44.488322 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rk7kq" Jan 28 15:28:44 crc kubenswrapper[4893]: I0128 15:28:44.734052 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9wp9g"] Jan 28 15:28:44 crc kubenswrapper[4893]: I0128 15:28:44.736328 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9wp9g" Jan 28 15:28:44 crc kubenswrapper[4893]: I0128 15:28:44.759058 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9wp9g"] Jan 28 15:28:44 crc kubenswrapper[4893]: I0128 15:28:44.796775 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdd342a-4f5b-4e5e-adf7-0884eaf53220-catalog-content\") pod \"certified-operators-9wp9g\" (UID: \"ccdd342a-4f5b-4e5e-adf7-0884eaf53220\") " pod="openshift-marketplace/certified-operators-9wp9g" Jan 28 15:28:44 crc kubenswrapper[4893]: I0128 15:28:44.796840 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk4h4\" (UniqueName: \"kubernetes.io/projected/ccdd342a-4f5b-4e5e-adf7-0884eaf53220-kube-api-access-qk4h4\") pod \"certified-operators-9wp9g\" (UID: \"ccdd342a-4f5b-4e5e-adf7-0884eaf53220\") " pod="openshift-marketplace/certified-operators-9wp9g" Jan 28 15:28:44 crc kubenswrapper[4893]: I0128 15:28:44.796890 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdd342a-4f5b-4e5e-adf7-0884eaf53220-utilities\") pod \"certified-operators-9wp9g\" (UID: \"ccdd342a-4f5b-4e5e-adf7-0884eaf53220\") " pod="openshift-marketplace/certified-operators-9wp9g" Jan 28 15:28:44 crc kubenswrapper[4893]: I0128 15:28:44.900181 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdd342a-4f5b-4e5e-adf7-0884eaf53220-catalog-content\") pod \"certified-operators-9wp9g\" (UID: \"ccdd342a-4f5b-4e5e-adf7-0884eaf53220\") " pod="openshift-marketplace/certified-operators-9wp9g" Jan 28 15:28:44 crc kubenswrapper[4893]: I0128 15:28:44.900449 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qk4h4\" (UniqueName: \"kubernetes.io/projected/ccdd342a-4f5b-4e5e-adf7-0884eaf53220-kube-api-access-qk4h4\") pod \"certified-operators-9wp9g\" (UID: \"ccdd342a-4f5b-4e5e-adf7-0884eaf53220\") " pod="openshift-marketplace/certified-operators-9wp9g" Jan 28 15:28:44 crc kubenswrapper[4893]: I0128 15:28:44.900563 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdd342a-4f5b-4e5e-adf7-0884eaf53220-utilities\") pod \"certified-operators-9wp9g\" (UID: \"ccdd342a-4f5b-4e5e-adf7-0884eaf53220\") " pod="openshift-marketplace/certified-operators-9wp9g" Jan 28 15:28:44 crc kubenswrapper[4893]: I0128 15:28:44.900964 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdd342a-4f5b-4e5e-adf7-0884eaf53220-catalog-content\") pod \"certified-operators-9wp9g\" (UID: \"ccdd342a-4f5b-4e5e-adf7-0884eaf53220\") " pod="openshift-marketplace/certified-operators-9wp9g" Jan 28 15:28:44 crc kubenswrapper[4893]: I0128 15:28:44.901106 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdd342a-4f5b-4e5e-adf7-0884eaf53220-utilities\") pod \"certified-operators-9wp9g\" (UID: \"ccdd342a-4f5b-4e5e-adf7-0884eaf53220\") " pod="openshift-marketplace/certified-operators-9wp9g" Jan 28 15:28:44 crc kubenswrapper[4893]: I0128 15:28:44.925593 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk4h4\" (UniqueName: \"kubernetes.io/projected/ccdd342a-4f5b-4e5e-adf7-0884eaf53220-kube-api-access-qk4h4\") pod \"certified-operators-9wp9g\" (UID: \"ccdd342a-4f5b-4e5e-adf7-0884eaf53220\") " pod="openshift-marketplace/certified-operators-9wp9g" Jan 28 15:28:44 crc kubenswrapper[4893]: I0128 15:28:44.940001 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rk7kq" Jan 28 15:28:45 crc kubenswrapper[4893]: I0128 15:28:45.061968 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9wp9g" Jan 28 15:28:45 crc kubenswrapper[4893]: I0128 15:28:45.568897 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9wp9g"] Jan 28 15:28:45 crc kubenswrapper[4893]: W0128 15:28:45.571561 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccdd342a_4f5b_4e5e_adf7_0884eaf53220.slice/crio-d967ce1d84cbfbf130077a7453e06a0881029cefd980c3af39842e2ded1c7ef0 WatchSource:0}: Error finding container d967ce1d84cbfbf130077a7453e06a0881029cefd980c3af39842e2ded1c7ef0: Status 404 returned error can't find the container with id d967ce1d84cbfbf130077a7453e06a0881029cefd980c3af39842e2ded1c7ef0 Jan 28 15:28:45 crc kubenswrapper[4893]: I0128 15:28:45.869345 4893 generic.go:334] "Generic (PLEG): container finished" podID="ccdd342a-4f5b-4e5e-adf7-0884eaf53220" containerID="c15eb4b0bfb9749c0070e834cd2e6d33bdb373277de7e73264f78dfda18c3f91" exitCode=0 Jan 28 15:28:45 crc kubenswrapper[4893]: I0128 15:28:45.869401 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wp9g" event={"ID":"ccdd342a-4f5b-4e5e-adf7-0884eaf53220","Type":"ContainerDied","Data":"c15eb4b0bfb9749c0070e834cd2e6d33bdb373277de7e73264f78dfda18c3f91"} Jan 28 15:28:45 crc kubenswrapper[4893]: I0128 15:28:45.869444 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wp9g" event={"ID":"ccdd342a-4f5b-4e5e-adf7-0884eaf53220","Type":"ContainerStarted","Data":"d967ce1d84cbfbf130077a7453e06a0881029cefd980c3af39842e2ded1c7ef0"} Jan 28 15:28:46 crc kubenswrapper[4893]: I0128 15:28:46.877813 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wp9g" event={"ID":"ccdd342a-4f5b-4e5e-adf7-0884eaf53220","Type":"ContainerStarted","Data":"e46f74a87a6d653b728d677939ec52fccc3f117a41b40d42c9b9467c11380c55"} Jan 28 15:28:47 crc kubenswrapper[4893]: I0128 15:28:47.330542 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rk7kq"] Jan 28 15:28:47 crc kubenswrapper[4893]: I0128 15:28:47.331060 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rk7kq" podUID="7c592a8d-fc28-4808-9752-99c79e40aabd" containerName="registry-server" containerID="cri-o://9f2bfa9ba1b93562b7470f4e5573a14a55292beef626fcd82b7ea8f353101a7e" gracePeriod=2 Jan 28 15:28:47 crc kubenswrapper[4893]: I0128 15:28:47.769371 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rk7kq" Jan 28 15:28:47 crc kubenswrapper[4893]: I0128 15:28:47.872451 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjqsx\" (UniqueName: \"kubernetes.io/projected/7c592a8d-fc28-4808-9752-99c79e40aabd-kube-api-access-mjqsx\") pod \"7c592a8d-fc28-4808-9752-99c79e40aabd\" (UID: \"7c592a8d-fc28-4808-9752-99c79e40aabd\") " Jan 28 15:28:47 crc kubenswrapper[4893]: I0128 15:28:47.872572 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c592a8d-fc28-4808-9752-99c79e40aabd-utilities\") pod \"7c592a8d-fc28-4808-9752-99c79e40aabd\" (UID: \"7c592a8d-fc28-4808-9752-99c79e40aabd\") " Jan 28 15:28:47 crc kubenswrapper[4893]: I0128 15:28:47.872647 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c592a8d-fc28-4808-9752-99c79e40aabd-catalog-content\") pod \"7c592a8d-fc28-4808-9752-99c79e40aabd\" (UID: \"7c592a8d-fc28-4808-9752-99c79e40aabd\") " Jan 28 15:28:47 crc kubenswrapper[4893]: I0128 15:28:47.873553 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c592a8d-fc28-4808-9752-99c79e40aabd-utilities" (OuterVolumeSpecName: "utilities") pod "7c592a8d-fc28-4808-9752-99c79e40aabd" (UID: "7c592a8d-fc28-4808-9752-99c79e40aabd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:28:47 crc kubenswrapper[4893]: I0128 15:28:47.880232 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c592a8d-fc28-4808-9752-99c79e40aabd-kube-api-access-mjqsx" (OuterVolumeSpecName: "kube-api-access-mjqsx") pod "7c592a8d-fc28-4808-9752-99c79e40aabd" (UID: "7c592a8d-fc28-4808-9752-99c79e40aabd"). InnerVolumeSpecName "kube-api-access-mjqsx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:28:47 crc kubenswrapper[4893]: I0128 15:28:47.891290 4893 generic.go:334] "Generic (PLEG): container finished" podID="ccdd342a-4f5b-4e5e-adf7-0884eaf53220" containerID="e46f74a87a6d653b728d677939ec52fccc3f117a41b40d42c9b9467c11380c55" exitCode=0 Jan 28 15:28:47 crc kubenswrapper[4893]: I0128 15:28:47.891355 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wp9g" event={"ID":"ccdd342a-4f5b-4e5e-adf7-0884eaf53220","Type":"ContainerDied","Data":"e46f74a87a6d653b728d677939ec52fccc3f117a41b40d42c9b9467c11380c55"} Jan 28 15:28:47 crc kubenswrapper[4893]: I0128 15:28:47.896034 4893 generic.go:334] "Generic (PLEG): container finished" podID="7c592a8d-fc28-4808-9752-99c79e40aabd" containerID="9f2bfa9ba1b93562b7470f4e5573a14a55292beef626fcd82b7ea8f353101a7e" exitCode=0 Jan 28 15:28:47 crc kubenswrapper[4893]: I0128 15:28:47.896083 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rk7kq" event={"ID":"7c592a8d-fc28-4808-9752-99c79e40aabd","Type":"ContainerDied","Data":"9f2bfa9ba1b93562b7470f4e5573a14a55292beef626fcd82b7ea8f353101a7e"} Jan 28 15:28:47 crc kubenswrapper[4893]: I0128 15:28:47.896105 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rk7kq" event={"ID":"7c592a8d-fc28-4808-9752-99c79e40aabd","Type":"ContainerDied","Data":"d39d8bf2c30c8ad1ed7f7b48db734382f26e070f002a3c41169837960a5023a6"} Jan 28 15:28:47 crc kubenswrapper[4893]: I0128 15:28:47.896122 4893 scope.go:117] "RemoveContainer" containerID="9f2bfa9ba1b93562b7470f4e5573a14a55292beef626fcd82b7ea8f353101a7e" Jan 28 15:28:47 crc kubenswrapper[4893]: I0128 15:28:47.896404 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rk7kq" Jan 28 15:28:47 crc kubenswrapper[4893]: I0128 15:28:47.925585 4893 scope.go:117] "RemoveContainer" containerID="291bfc16f75d818362a9619ef16ea7fae67643665629678c520dd190bea8d74d" Jan 28 15:28:47 crc kubenswrapper[4893]: I0128 15:28:47.949043 4893 scope.go:117] "RemoveContainer" containerID="e854c998cb03bd040fa228dfba07cf69216dad9b00d8b86732a61b8c402642f5" Jan 28 15:28:47 crc kubenswrapper[4893]: I0128 15:28:47.951352 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c592a8d-fc28-4808-9752-99c79e40aabd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c592a8d-fc28-4808-9752-99c79e40aabd" (UID: "7c592a8d-fc28-4808-9752-99c79e40aabd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:28:47 crc kubenswrapper[4893]: I0128 15:28:47.974953 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjqsx\" (UniqueName: \"kubernetes.io/projected/7c592a8d-fc28-4808-9752-99c79e40aabd-kube-api-access-mjqsx\") on node \"crc\" DevicePath \"\"" Jan 28 15:28:47 crc kubenswrapper[4893]: I0128 15:28:47.974987 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c592a8d-fc28-4808-9752-99c79e40aabd-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:28:47 crc kubenswrapper[4893]: I0128 15:28:47.974998 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c592a8d-fc28-4808-9752-99c79e40aabd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:28:47 crc kubenswrapper[4893]: I0128 15:28:47.980880 4893 scope.go:117] "RemoveContainer" containerID="9f2bfa9ba1b93562b7470f4e5573a14a55292beef626fcd82b7ea8f353101a7e" Jan 28 15:28:47 crc kubenswrapper[4893]: E0128 15:28:47.981347 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f2bfa9ba1b93562b7470f4e5573a14a55292beef626fcd82b7ea8f353101a7e\": container with ID starting with 9f2bfa9ba1b93562b7470f4e5573a14a55292beef626fcd82b7ea8f353101a7e not found: ID does not exist" containerID="9f2bfa9ba1b93562b7470f4e5573a14a55292beef626fcd82b7ea8f353101a7e" Jan 28 15:28:47 crc kubenswrapper[4893]: I0128 15:28:47.981388 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f2bfa9ba1b93562b7470f4e5573a14a55292beef626fcd82b7ea8f353101a7e"} err="failed to get container status \"9f2bfa9ba1b93562b7470f4e5573a14a55292beef626fcd82b7ea8f353101a7e\": rpc error: code = NotFound desc = could not find container \"9f2bfa9ba1b93562b7470f4e5573a14a55292beef626fcd82b7ea8f353101a7e\": container with ID starting with 9f2bfa9ba1b93562b7470f4e5573a14a55292beef626fcd82b7ea8f353101a7e not found: ID does not exist" Jan 28 15:28:47 crc kubenswrapper[4893]: I0128 15:28:47.981414 4893 scope.go:117] "RemoveContainer" containerID="291bfc16f75d818362a9619ef16ea7fae67643665629678c520dd190bea8d74d" Jan 28 15:28:47 crc kubenswrapper[4893]: E0128 15:28:47.981845 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"291bfc16f75d818362a9619ef16ea7fae67643665629678c520dd190bea8d74d\": container with ID starting with 291bfc16f75d818362a9619ef16ea7fae67643665629678c520dd190bea8d74d not found: ID does not exist" containerID="291bfc16f75d818362a9619ef16ea7fae67643665629678c520dd190bea8d74d" Jan 28 15:28:47 crc kubenswrapper[4893]: I0128 15:28:47.981880 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"291bfc16f75d818362a9619ef16ea7fae67643665629678c520dd190bea8d74d"} err="failed to get container status \"291bfc16f75d818362a9619ef16ea7fae67643665629678c520dd190bea8d74d\": rpc error: code = NotFound desc = could not find container \"291bfc16f75d818362a9619ef16ea7fae67643665629678c520dd190bea8d74d\": container with ID starting with 291bfc16f75d818362a9619ef16ea7fae67643665629678c520dd190bea8d74d not found: ID does not exist" Jan 28 15:28:47 crc kubenswrapper[4893]: I0128 15:28:47.981897 4893 scope.go:117] "RemoveContainer" containerID="e854c998cb03bd040fa228dfba07cf69216dad9b00d8b86732a61b8c402642f5" Jan 28 15:28:47 crc 
kubenswrapper[4893]: E0128 15:28:47.982167 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e854c998cb03bd040fa228dfba07cf69216dad9b00d8b86732a61b8c402642f5\": container with ID starting with e854c998cb03bd040fa228dfba07cf69216dad9b00d8b86732a61b8c402642f5 not found: ID does not exist" containerID="e854c998cb03bd040fa228dfba07cf69216dad9b00d8b86732a61b8c402642f5" Jan 28 15:28:47 crc kubenswrapper[4893]: I0128 15:28:47.982205 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e854c998cb03bd040fa228dfba07cf69216dad9b00d8b86732a61b8c402642f5"} err="failed to get container status \"e854c998cb03bd040fa228dfba07cf69216dad9b00d8b86732a61b8c402642f5\": rpc error: code = NotFound desc = could not find container \"e854c998cb03bd040fa228dfba07cf69216dad9b00d8b86732a61b8c402642f5\": container with ID starting with e854c998cb03bd040fa228dfba07cf69216dad9b00d8b86732a61b8c402642f5 not found: ID does not exist" Jan 28 15:28:48 crc kubenswrapper[4893]: I0128 15:28:48.232430 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rk7kq"] Jan 28 15:28:48 crc kubenswrapper[4893]: I0128 15:28:48.241271 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rk7kq"] Jan 28 15:28:48 crc kubenswrapper[4893]: I0128 15:28:48.908844 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c592a8d-fc28-4808-9752-99c79e40aabd" path="/var/lib/kubelet/pods/7c592a8d-fc28-4808-9752-99c79e40aabd/volumes" Jan 28 15:28:48 crc kubenswrapper[4893]: I0128 15:28:48.912721 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wp9g" event={"ID":"ccdd342a-4f5b-4e5e-adf7-0884eaf53220","Type":"ContainerStarted","Data":"3928c18638bce182ad954ac7350c526448951ade6ef06f7602af421efe37460d"} Jan 28 15:28:48 crc kubenswrapper[4893]: I0128 15:28:48.937618 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9wp9g" podStartSLOduration=2.484464918 podStartE2EDuration="4.937597819s" podCreationTimestamp="2026-01-28 15:28:44 +0000 UTC" firstStartedPulling="2026-01-28 15:28:45.870879485 +0000 UTC m=+1643.644494513" lastFinishedPulling="2026-01-28 15:28:48.324012386 +0000 UTC m=+1646.097627414" observedRunningTime="2026-01-28 15:28:48.929970872 +0000 UTC m=+1646.703585900" watchObservedRunningTime="2026-01-28 15:28:48.937597819 +0000 UTC m=+1646.711212847" Jan 28 15:28:55 crc kubenswrapper[4893]: I0128 15:28:55.063270 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9wp9g" Jan 28 15:28:55 crc kubenswrapper[4893]: I0128 15:28:55.063851 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9wp9g" Jan 28 15:28:55 crc kubenswrapper[4893]: I0128 15:28:55.108024 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9wp9g" Jan 28 15:28:56 crc kubenswrapper[4893]: I0128 15:28:56.004659 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9wp9g" Jan 28 15:28:56 crc kubenswrapper[4893]: I0128 15:28:56.047982 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9wp9g"] Jan 28 15:28:57 crc 
kubenswrapper[4893]: I0128 15:28:57.975587 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9wp9g" podUID="ccdd342a-4f5b-4e5e-adf7-0884eaf53220" containerName="registry-server" containerID="cri-o://3928c18638bce182ad954ac7350c526448951ade6ef06f7602af421efe37460d" gracePeriod=2 Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.374989 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9wp9g" Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.453283 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdd342a-4f5b-4e5e-adf7-0884eaf53220-utilities\") pod \"ccdd342a-4f5b-4e5e-adf7-0884eaf53220\" (UID: \"ccdd342a-4f5b-4e5e-adf7-0884eaf53220\") " Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.454625 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk4h4\" (UniqueName: \"kubernetes.io/projected/ccdd342a-4f5b-4e5e-adf7-0884eaf53220-kube-api-access-qk4h4\") pod \"ccdd342a-4f5b-4e5e-adf7-0884eaf53220\" (UID: \"ccdd342a-4f5b-4e5e-adf7-0884eaf53220\") " Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.454835 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdd342a-4f5b-4e5e-adf7-0884eaf53220-catalog-content\") pod \"ccdd342a-4f5b-4e5e-adf7-0884eaf53220\" (UID: \"ccdd342a-4f5b-4e5e-adf7-0884eaf53220\") " Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.455723 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccdd342a-4f5b-4e5e-adf7-0884eaf53220-utilities" (OuterVolumeSpecName: "utilities") pod "ccdd342a-4f5b-4e5e-adf7-0884eaf53220" (UID: "ccdd342a-4f5b-4e5e-adf7-0884eaf53220"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.456754 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccdd342a-4f5b-4e5e-adf7-0884eaf53220-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.461140 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccdd342a-4f5b-4e5e-adf7-0884eaf53220-kube-api-access-qk4h4" (OuterVolumeSpecName: "kube-api-access-qk4h4") pod "ccdd342a-4f5b-4e5e-adf7-0884eaf53220" (UID: "ccdd342a-4f5b-4e5e-adf7-0884eaf53220"). InnerVolumeSpecName "kube-api-access-qk4h4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.508100 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccdd342a-4f5b-4e5e-adf7-0884eaf53220-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ccdd342a-4f5b-4e5e-adf7-0884eaf53220" (UID: "ccdd342a-4f5b-4e5e-adf7-0884eaf53220"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.558088 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk4h4\" (UniqueName: \"kubernetes.io/projected/ccdd342a-4f5b-4e5e-adf7-0884eaf53220-kube-api-access-qk4h4\") on node \"crc\" DevicePath \"\"" Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.558139 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccdd342a-4f5b-4e5e-adf7-0884eaf53220-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.799325 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-v4zxk"] Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.808451 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-9fm7k"] Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.817710 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-9fm7k"] Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.827909 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-v4zxk"] Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.902615 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3de68770-7e4d-4fcb-98de-79c995444045" path="/var/lib/kubelet/pods/3de68770-7e4d-4fcb-98de-79c995444045/volumes" Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.903435 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c" path="/var/lib/kubelet/pods/ccb6f9dc-4e92-4fb7-8ac7-fa95257dee6c/volumes" Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.908820 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell1ce07-account-delete-qw4m6"] Jan 28 15:28:58 crc kubenswrapper[4893]: E0128 15:28:58.909538 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccdd342a-4f5b-4e5e-adf7-0884eaf53220" containerName="extract-utilities" Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.909649 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccdd342a-4f5b-4e5e-adf7-0884eaf53220" containerName="extract-utilities" Jan 28 15:28:58 crc kubenswrapper[4893]: E0128 15:28:58.909723 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccdd342a-4f5b-4e5e-adf7-0884eaf53220" containerName="registry-server" Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.910324 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccdd342a-4f5b-4e5e-adf7-0884eaf53220" containerName="registry-server" Jan 28 15:28:58 crc kubenswrapper[4893]: E0128 15:28:58.910435 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c592a8d-fc28-4808-9752-99c79e40aabd" containerName="extract-utilities" Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.910538 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c592a8d-fc28-4808-9752-99c79e40aabd" containerName="extract-utilities" Jan 28 15:28:58 crc kubenswrapper[4893]: E0128 15:28:58.910638 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c592a8d-fc28-4808-9752-99c79e40aabd" containerName="extract-content" Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.910715 4893 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7c592a8d-fc28-4808-9752-99c79e40aabd" containerName="extract-content" Jan 28 15:28:58 crc kubenswrapper[4893]: E0128 15:28:58.910798 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c592a8d-fc28-4808-9752-99c79e40aabd" containerName="registry-server" Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.910865 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c592a8d-fc28-4808-9752-99c79e40aabd" containerName="registry-server" Jan 28 15:28:58 crc kubenswrapper[4893]: E0128 15:28:58.910934 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccdd342a-4f5b-4e5e-adf7-0884eaf53220" containerName="extract-content" Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.910995 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccdd342a-4f5b-4e5e-adf7-0884eaf53220" containerName="extract-content" Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.911357 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c592a8d-fc28-4808-9752-99c79e40aabd" containerName="registry-server" Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.911466 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccdd342a-4f5b-4e5e-adf7-0884eaf53220" containerName="registry-server" Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.912367 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell1ce07-account-delete-qw4m6" Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.920973 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell1ce07-account-delete-qw4m6"] Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.992900 4893 generic.go:334] "Generic (PLEG): container finished" podID="ccdd342a-4f5b-4e5e-adf7-0884eaf53220" containerID="3928c18638bce182ad954ac7350c526448951ade6ef06f7602af421efe37460d" exitCode=0 Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.993895 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9wp9g" Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.993920 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wp9g" event={"ID":"ccdd342a-4f5b-4e5e-adf7-0884eaf53220","Type":"ContainerDied","Data":"3928c18638bce182ad954ac7350c526448951ade6ef06f7602af421efe37460d"} Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.995051 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9wp9g" event={"ID":"ccdd342a-4f5b-4e5e-adf7-0884eaf53220","Type":"ContainerDied","Data":"d967ce1d84cbfbf130077a7453e06a0881029cefd980c3af39842e2ded1c7ef0"} Jan 28 15:28:58 crc kubenswrapper[4893]: I0128 15:28:58.995113 4893 scope.go:117] "RemoveContainer" containerID="3928c18638bce182ad954ac7350c526448951ade6ef06f7602af421efe37460d" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.005533 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.005804 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="79781d2a-b011-48ea-a6ef-038161633a26" containerName="nova-kuttl-metadata-log" containerID="cri-o://c875e9afafaae49b47ee26abb590701a147bab3af3ac374bd6fd73365949911b" gracePeriod=30 Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.005954 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="79781d2a-b011-48ea-a6ef-038161633a26" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://6e2b6ceef01087ae6c36c2862df43479e126a2e0a17c3c6e4303587f6ae89a81" gracePeriod=30 Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.016888 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.017091 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="c157b73b-8217-4593-bfa1-ed8b0191ec7e" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://e03291fba53d205068c9d3d3235f45af59ca514e95f692441d014b55a7efc62b" gracePeriod=30 Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.022352 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novaapifaf0-account-delete-hd8cb"] Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.035956 4893 scope.go:117] "RemoveContainer" containerID="e46f74a87a6d653b728d677939ec52fccc3f117a41b40d42c9b9467c11380c55" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.062290 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9wp9g"] Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.062405 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novaapifaf0-account-delete-hd8cb" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.100384 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9wp9g"] Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.114720 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d49j5\" (UniqueName: \"kubernetes.io/projected/44befa59-1b8c-48a6-8ff8-5768d8d5f2c0-kube-api-access-d49j5\") pod \"novaapifaf0-account-delete-hd8cb\" (UID: \"44befa59-1b8c-48a6-8ff8-5768d8d5f2c0\") " pod="nova-kuttl-default/novaapifaf0-account-delete-hd8cb" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.114802 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3038d018-7447-46e9-ab20-20d56a9717b6-operator-scripts\") pod \"novacell1ce07-account-delete-qw4m6\" (UID: \"3038d018-7447-46e9-ab20-20d56a9717b6\") " pod="nova-kuttl-default/novacell1ce07-account-delete-qw4m6" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.114863 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44befa59-1b8c-48a6-8ff8-5768d8d5f2c0-operator-scripts\") pod \"novaapifaf0-account-delete-hd8cb\" (UID: \"44befa59-1b8c-48a6-8ff8-5768d8d5f2c0\") " pod="nova-kuttl-default/novaapifaf0-account-delete-hd8cb" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.115022 4893 scope.go:117] "RemoveContainer" containerID="c15eb4b0bfb9749c0070e834cd2e6d33bdb373277de7e73264f78dfda18c3f91" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.115252 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5ljx\" (UniqueName: \"kubernetes.io/projected/3038d018-7447-46e9-ab20-20d56a9717b6-kube-api-access-d5ljx\") pod \"novacell1ce07-account-delete-qw4m6\" (UID: \"3038d018-7447-46e9-ab20-20d56a9717b6\") " pod="nova-kuttl-default/novacell1ce07-account-delete-qw4m6" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.146772 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapifaf0-account-delete-hd8cb"] Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.166955 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.167436 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podUID="fc5e5f56-6f65-41e2-9d47-fe5a59541a00" containerName="nova-kuttl-cell1-novncproxy-novncproxy" containerID="cri-o://4f44fa1204db3503cb800782b765933bcaeca9ba851a533a9f3ee1e6defdf509" gracePeriod=30 Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.194409 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell0c310-account-delete-6bxln"] Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.195842 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell0c310-account-delete-6bxln" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.212854 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell0c310-account-delete-6bxln"] Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.221290 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5ljx\" (UniqueName: \"kubernetes.io/projected/3038d018-7447-46e9-ab20-20d56a9717b6-kube-api-access-d5ljx\") pod \"novacell1ce07-account-delete-qw4m6\" (UID: \"3038d018-7447-46e9-ab20-20d56a9717b6\") " pod="nova-kuttl-default/novacell1ce07-account-delete-qw4m6" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.221360 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d49j5\" (UniqueName: \"kubernetes.io/projected/44befa59-1b8c-48a6-8ff8-5768d8d5f2c0-kube-api-access-d49j5\") pod \"novaapifaf0-account-delete-hd8cb\" (UID: \"44befa59-1b8c-48a6-8ff8-5768d8d5f2c0\") " pod="nova-kuttl-default/novaapifaf0-account-delete-hd8cb" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.221408 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psv5k\" (UniqueName: \"kubernetes.io/projected/4d26e719-2040-449b-9b38-66cc87bb9d63-kube-api-access-psv5k\") pod \"novacell0c310-account-delete-6bxln\" (UID: \"4d26e719-2040-449b-9b38-66cc87bb9d63\") " pod="nova-kuttl-default/novacell0c310-account-delete-6bxln" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.221455 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3038d018-7447-46e9-ab20-20d56a9717b6-operator-scripts\") pod \"novacell1ce07-account-delete-qw4m6\" (UID: \"3038d018-7447-46e9-ab20-20d56a9717b6\") " pod="nova-kuttl-default/novacell1ce07-account-delete-qw4m6" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.221609 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d26e719-2040-449b-9b38-66cc87bb9d63-operator-scripts\") pod \"novacell0c310-account-delete-6bxln\" (UID: \"4d26e719-2040-449b-9b38-66cc87bb9d63\") " pod="nova-kuttl-default/novacell0c310-account-delete-6bxln" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.221660 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44befa59-1b8c-48a6-8ff8-5768d8d5f2c0-operator-scripts\") pod \"novaapifaf0-account-delete-hd8cb\" (UID: \"44befa59-1b8c-48a6-8ff8-5768d8d5f2c0\") " pod="nova-kuttl-default/novaapifaf0-account-delete-hd8cb" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.224429 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3038d018-7447-46e9-ab20-20d56a9717b6-operator-scripts\") pod \"novacell1ce07-account-delete-qw4m6\" (UID: \"3038d018-7447-46e9-ab20-20d56a9717b6\") " pod="nova-kuttl-default/novacell1ce07-account-delete-qw4m6" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.228967 4893 scope.go:117] "RemoveContainer" containerID="3928c18638bce182ad954ac7350c526448951ade6ef06f7602af421efe37460d" Jan 28 15:28:59 crc kubenswrapper[4893]: E0128 15:28:59.229422 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= could not find container \"3928c18638bce182ad954ac7350c526448951ade6ef06f7602af421efe37460d\": container with ID starting with 3928c18638bce182ad954ac7350c526448951ade6ef06f7602af421efe37460d not found: ID does not exist" containerID="3928c18638bce182ad954ac7350c526448951ade6ef06f7602af421efe37460d" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.229520 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3928c18638bce182ad954ac7350c526448951ade6ef06f7602af421efe37460d"} err="failed to get container status \"3928c18638bce182ad954ac7350c526448951ade6ef06f7602af421efe37460d\": rpc error: code = NotFound desc = could not find container \"3928c18638bce182ad954ac7350c526448951ade6ef06f7602af421efe37460d\": container with ID starting with 3928c18638bce182ad954ac7350c526448951ade6ef06f7602af421efe37460d not found: ID does not exist" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.229555 4893 scope.go:117] "RemoveContainer" containerID="e46f74a87a6d653b728d677939ec52fccc3f117a41b40d42c9b9467c11380c55" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.229717 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44befa59-1b8c-48a6-8ff8-5768d8d5f2c0-operator-scripts\") pod \"novaapifaf0-account-delete-hd8cb\" (UID: \"44befa59-1b8c-48a6-8ff8-5768d8d5f2c0\") " pod="nova-kuttl-default/novaapifaf0-account-delete-hd8cb" Jan 28 15:28:59 crc kubenswrapper[4893]: E0128 15:28:59.233772 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e46f74a87a6d653b728d677939ec52fccc3f117a41b40d42c9b9467c11380c55\": container with ID starting with e46f74a87a6d653b728d677939ec52fccc3f117a41b40d42c9b9467c11380c55 not found: ID does not exist" containerID="e46f74a87a6d653b728d677939ec52fccc3f117a41b40d42c9b9467c11380c55" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.233844 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e46f74a87a6d653b728d677939ec52fccc3f117a41b40d42c9b9467c11380c55"} err="failed to get container status \"e46f74a87a6d653b728d677939ec52fccc3f117a41b40d42c9b9467c11380c55\": rpc error: code = NotFound desc = could not find container \"e46f74a87a6d653b728d677939ec52fccc3f117a41b40d42c9b9467c11380c55\": container with ID starting with e46f74a87a6d653b728d677939ec52fccc3f117a41b40d42c9b9467c11380c55 not found: ID does not exist" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.233877 4893 scope.go:117] "RemoveContainer" containerID="c15eb4b0bfb9749c0070e834cd2e6d33bdb373277de7e73264f78dfda18c3f91" Jan 28 15:28:59 crc kubenswrapper[4893]: E0128 15:28:59.247715 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c15eb4b0bfb9749c0070e834cd2e6d33bdb373277de7e73264f78dfda18c3f91\": container with ID starting with c15eb4b0bfb9749c0070e834cd2e6d33bdb373277de7e73264f78dfda18c3f91 not found: ID does not exist" containerID="c15eb4b0bfb9749c0070e834cd2e6d33bdb373277de7e73264f78dfda18c3f91" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.247786 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c15eb4b0bfb9749c0070e834cd2e6d33bdb373277de7e73264f78dfda18c3f91"} err="failed to get container status \"c15eb4b0bfb9749c0070e834cd2e6d33bdb373277de7e73264f78dfda18c3f91\": rpc error: code = NotFound desc = 
could not find container \"c15eb4b0bfb9749c0070e834cd2e6d33bdb373277de7e73264f78dfda18c3f91\": container with ID starting with c15eb4b0bfb9749c0070e834cd2e6d33bdb373277de7e73264f78dfda18c3f91 not found: ID does not exist" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.256919 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5ljx\" (UniqueName: \"kubernetes.io/projected/3038d018-7447-46e9-ab20-20d56a9717b6-kube-api-access-d5ljx\") pod \"novacell1ce07-account-delete-qw4m6\" (UID: \"3038d018-7447-46e9-ab20-20d56a9717b6\") " pod="nova-kuttl-default/novacell1ce07-account-delete-qw4m6" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.268069 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.268391 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="aef6e7fe-a1a0-4a6c-9c00-ba875605428b" containerName="nova-kuttl-api-log" containerID="cri-o://f3392203e41a092353a673e5cfd80d2beb0dcb2175bcfcc07fbe6a5df465b61a" gracePeriod=30 Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.268582 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="aef6e7fe-a1a0-4a6c-9c00-ba875605428b" containerName="nova-kuttl-api-api" containerID="cri-o://75606480fc61ecb85560c328753104bbd1c6f4d2a9da3715865a87e986b59849" gracePeriod=30 Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.274751 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d49j5\" (UniqueName: \"kubernetes.io/projected/44befa59-1b8c-48a6-8ff8-5768d8d5f2c0-kube-api-access-d49j5\") pod \"novaapifaf0-account-delete-hd8cb\" (UID: \"44befa59-1b8c-48a6-8ff8-5768d8d5f2c0\") " pod="nova-kuttl-default/novaapifaf0-account-delete-hd8cb" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.282620 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.282896 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="671633bc-0311-475f-9e70-b101fa5257ad" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://209d36d98c0be75b9759ce5a3e10e5042786d1fa0dfebc8671e432a3f65f3890" gracePeriod=30 Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.292190 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-82rfx"] Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.301085 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-82rfx"] Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.323640 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-29hll"] Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.323997 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psv5k\" (UniqueName: \"kubernetes.io/projected/4d26e719-2040-449b-9b38-66cc87bb9d63-kube-api-access-psv5k\") pod \"novacell0c310-account-delete-6bxln\" (UID: \"4d26e719-2040-449b-9b38-66cc87bb9d63\") " pod="nova-kuttl-default/novacell0c310-account-delete-6bxln" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.324080 4893 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d26e719-2040-449b-9b38-66cc87bb9d63-operator-scripts\") pod \"novacell0c310-account-delete-6bxln\" (UID: \"4d26e719-2040-449b-9b38-66cc87bb9d63\") " pod="nova-kuttl-default/novacell0c310-account-delete-6bxln" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.325125 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d26e719-2040-449b-9b38-66cc87bb9d63-operator-scripts\") pod \"novacell0c310-account-delete-6bxln\" (UID: \"4d26e719-2040-449b-9b38-66cc87bb9d63\") " pod="nova-kuttl-default/novacell0c310-account-delete-6bxln" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.336202 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.336520 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="db584031-a14c-4916-a5de-767628445966" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://3b74684b7ff31dd2de1020b80d60cadc54168506dc25324121a4b86128900c97" gracePeriod=30 Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.346751 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-29hll"] Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.350665 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psv5k\" (UniqueName: \"kubernetes.io/projected/4d26e719-2040-449b-9b38-66cc87bb9d63-kube-api-access-psv5k\") pod \"novacell0c310-account-delete-6bxln\" (UID: \"4d26e719-2040-449b-9b38-66cc87bb9d63\") " pod="nova-kuttl-default/novacell0c310-account-delete-6bxln" Jan 28 15:28:59 crc kubenswrapper[4893]: E0128 15:28:59.424393 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b74684b7ff31dd2de1020b80d60cadc54168506dc25324121a4b86128900c97" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 15:28:59 crc kubenswrapper[4893]: E0128 15:28:59.426215 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b74684b7ff31dd2de1020b80d60cadc54168506dc25324121a4b86128900c97" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 15:28:59 crc kubenswrapper[4893]: E0128 15:28:59.427579 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b74684b7ff31dd2de1020b80d60cadc54168506dc25324121a4b86128900c97" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 15:28:59 crc kubenswrapper[4893]: E0128 15:28:59.427633 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="db584031-a14c-4916-a5de-767628445966" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 
15:28:59.498303 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapifaf0-account-delete-hd8cb" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.530666 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell0c310-account-delete-6bxln" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.532075 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell1ce07-account-delete-qw4m6" Jan 28 15:28:59 crc kubenswrapper[4893]: I0128 15:28:59.990848 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapifaf0-account-delete-hd8cb"] Jan 28 15:29:00 crc kubenswrapper[4893]: I0128 15:29:00.007688 4893 generic.go:334] "Generic (PLEG): container finished" podID="79781d2a-b011-48ea-a6ef-038161633a26" containerID="c875e9afafaae49b47ee26abb590701a147bab3af3ac374bd6fd73365949911b" exitCode=143 Jan 28 15:29:00 crc kubenswrapper[4893]: I0128 15:29:00.007759 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"79781d2a-b011-48ea-a6ef-038161633a26","Type":"ContainerDied","Data":"c875e9afafaae49b47ee26abb590701a147bab3af3ac374bd6fd73365949911b"} Jan 28 15:29:00 crc kubenswrapper[4893]: I0128 15:29:00.011494 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapifaf0-account-delete-hd8cb" event={"ID":"44befa59-1b8c-48a6-8ff8-5768d8d5f2c0","Type":"ContainerStarted","Data":"167fa75e5faf789fb7452c079a6150524984c1a97fe15b2cdf91ffbeecb21e65"} Jan 28 15:29:00 crc kubenswrapper[4893]: I0128 15:29:00.017628 4893 generic.go:334] "Generic (PLEG): container finished" podID="fc5e5f56-6f65-41e2-9d47-fe5a59541a00" containerID="4f44fa1204db3503cb800782b765933bcaeca9ba851a533a9f3ee1e6defdf509" exitCode=0 Jan 28 15:29:00 crc kubenswrapper[4893]: I0128 15:29:00.017691 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"fc5e5f56-6f65-41e2-9d47-fe5a59541a00","Type":"ContainerDied","Data":"4f44fa1204db3503cb800782b765933bcaeca9ba851a533a9f3ee1e6defdf509"} Jan 28 15:29:00 crc kubenswrapper[4893]: I0128 15:29:00.027617 4893 generic.go:334] "Generic (PLEG): container finished" podID="aef6e7fe-a1a0-4a6c-9c00-ba875605428b" containerID="f3392203e41a092353a673e5cfd80d2beb0dcb2175bcfcc07fbe6a5df465b61a" exitCode=143 Jan 28 15:29:00 crc kubenswrapper[4893]: I0128 15:29:00.027678 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"aef6e7fe-a1a0-4a6c-9c00-ba875605428b","Type":"ContainerDied","Data":"f3392203e41a092353a673e5cfd80d2beb0dcb2175bcfcc07fbe6a5df465b61a"} Jan 28 15:29:00 crc kubenswrapper[4893]: I0128 15:29:00.052799 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell1ce07-account-delete-qw4m6"] Jan 28 15:29:00 crc kubenswrapper[4893]: W0128 15:29:00.076757 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3038d018_7447_46e9_ab20_20d56a9717b6.slice/crio-140ec349a41dd275a624b5495cdd8fe9998f9169f0e639ea1166b4c0afdd4b9a WatchSource:0}: Error finding container 140ec349a41dd275a624b5495cdd8fe9998f9169f0e639ea1166b4c0afdd4b9a: Status 404 returned error can't find the container with id 140ec349a41dd275a624b5495cdd8fe9998f9169f0e639ea1166b4c0afdd4b9a Jan 28 15:29:00 crc kubenswrapper[4893]: I0128 15:29:00.159503 
4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell0c310-account-delete-6bxln"] Jan 28 15:29:00 crc kubenswrapper[4893]: I0128 15:29:00.415829 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:29:00 crc kubenswrapper[4893]: I0128 15:29:00.554992 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc5e5f56-6f65-41e2-9d47-fe5a59541a00-config-data\") pod \"fc5e5f56-6f65-41e2-9d47-fe5a59541a00\" (UID: \"fc5e5f56-6f65-41e2-9d47-fe5a59541a00\") " Jan 28 15:29:00 crc kubenswrapper[4893]: I0128 15:29:00.555084 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v4sq\" (UniqueName: \"kubernetes.io/projected/fc5e5f56-6f65-41e2-9d47-fe5a59541a00-kube-api-access-2v4sq\") pod \"fc5e5f56-6f65-41e2-9d47-fe5a59541a00\" (UID: \"fc5e5f56-6f65-41e2-9d47-fe5a59541a00\") " Jan 28 15:29:00 crc kubenswrapper[4893]: I0128 15:29:00.561117 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5e5f56-6f65-41e2-9d47-fe5a59541a00-kube-api-access-2v4sq" (OuterVolumeSpecName: "kube-api-access-2v4sq") pod "fc5e5f56-6f65-41e2-9d47-fe5a59541a00" (UID: "fc5e5f56-6f65-41e2-9d47-fe5a59541a00"). InnerVolumeSpecName "kube-api-access-2v4sq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:29:00 crc kubenswrapper[4893]: I0128 15:29:00.584143 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5e5f56-6f65-41e2-9d47-fe5a59541a00-config-data" (OuterVolumeSpecName: "config-data") pod "fc5e5f56-6f65-41e2-9d47-fe5a59541a00" (UID: "fc5e5f56-6f65-41e2-9d47-fe5a59541a00"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:29:00 crc kubenswrapper[4893]: I0128 15:29:00.586700 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:29:00 crc kubenswrapper[4893]: I0128 15:29:00.656947 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc5e5f56-6f65-41e2-9d47-fe5a59541a00-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:00 crc kubenswrapper[4893]: I0128 15:29:00.656976 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2v4sq\" (UniqueName: \"kubernetes.io/projected/fc5e5f56-6f65-41e2-9d47-fe5a59541a00-kube-api-access-2v4sq\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:00 crc kubenswrapper[4893]: I0128 15:29:00.757816 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bvq4\" (UniqueName: \"kubernetes.io/projected/671633bc-0311-475f-9e70-b101fa5257ad-kube-api-access-8bvq4\") pod \"671633bc-0311-475f-9e70-b101fa5257ad\" (UID: \"671633bc-0311-475f-9e70-b101fa5257ad\") " Jan 28 15:29:00 crc kubenswrapper[4893]: I0128 15:29:00.758108 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/671633bc-0311-475f-9e70-b101fa5257ad-config-data\") pod \"671633bc-0311-475f-9e70-b101fa5257ad\" (UID: \"671633bc-0311-475f-9e70-b101fa5257ad\") " Jan 28 15:29:00 crc kubenswrapper[4893]: I0128 15:29:00.760445 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/671633bc-0311-475f-9e70-b101fa5257ad-kube-api-access-8bvq4" (OuterVolumeSpecName: "kube-api-access-8bvq4") pod "671633bc-0311-475f-9e70-b101fa5257ad" (UID: "671633bc-0311-475f-9e70-b101fa5257ad"). InnerVolumeSpecName "kube-api-access-8bvq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:29:00 crc kubenswrapper[4893]: I0128 15:29:00.780080 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/671633bc-0311-475f-9e70-b101fa5257ad-config-data" (OuterVolumeSpecName: "config-data") pod "671633bc-0311-475f-9e70-b101fa5257ad" (UID: "671633bc-0311-475f-9e70-b101fa5257ad"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:29:00 crc kubenswrapper[4893]: I0128 15:29:00.860436 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/671633bc-0311-475f-9e70-b101fa5257ad-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:00 crc kubenswrapper[4893]: I0128 15:29:00.860490 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bvq4\" (UniqueName: \"kubernetes.io/projected/671633bc-0311-475f-9e70-b101fa5257ad-kube-api-access-8bvq4\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:00 crc kubenswrapper[4893]: I0128 15:29:00.901379 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="638660ab-7425-4aec-bc6e-480defa16c71" path="/var/lib/kubelet/pods/638660ab-7425-4aec-bc6e-480defa16c71/volumes" Jan 28 15:29:00 crc kubenswrapper[4893]: I0128 15:29:00.901915 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccdd342a-4f5b-4e5e-adf7-0884eaf53220" path="/var/lib/kubelet/pods/ccdd342a-4f5b-4e5e-adf7-0884eaf53220/volumes" Jan 28 15:29:00 crc kubenswrapper[4893]: I0128 15:29:00.902711 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f671187e-a6f6-47bc-8627-f324e5e1ff10" path="/var/lib/kubelet/pods/f671187e-a6f6-47bc-8627-f324e5e1ff10/volumes" Jan 28 15:29:01 crc kubenswrapper[4893]: I0128 15:29:01.042755 4893 generic.go:334] "Generic (PLEG): container finished" podID="4d26e719-2040-449b-9b38-66cc87bb9d63" containerID="a7ec881599c5217c35142168671f1668a634a165131961ae038c87ce092a710a" exitCode=0 Jan 28 15:29:01 crc kubenswrapper[4893]: I0128 15:29:01.042929 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell0c310-account-delete-6bxln" event={"ID":"4d26e719-2040-449b-9b38-66cc87bb9d63","Type":"ContainerDied","Data":"a7ec881599c5217c35142168671f1668a634a165131961ae038c87ce092a710a"} Jan 28 15:29:01 crc kubenswrapper[4893]: I0128 15:29:01.043600 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell0c310-account-delete-6bxln" event={"ID":"4d26e719-2040-449b-9b38-66cc87bb9d63","Type":"ContainerStarted","Data":"780bb618461b49fe75d449a7edbbbec1d578fca5b05c0559d37d700143b4728a"} Jan 28 15:29:01 crc kubenswrapper[4893]: I0128 15:29:01.045607 4893 generic.go:334] "Generic (PLEG): container finished" podID="671633bc-0311-475f-9e70-b101fa5257ad" containerID="209d36d98c0be75b9759ce5a3e10e5042786d1fa0dfebc8671e432a3f65f3890" exitCode=0 Jan 28 15:29:01 crc kubenswrapper[4893]: I0128 15:29:01.045639 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:29:01 crc kubenswrapper[4893]: I0128 15:29:01.045673 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"671633bc-0311-475f-9e70-b101fa5257ad","Type":"ContainerDied","Data":"209d36d98c0be75b9759ce5a3e10e5042786d1fa0dfebc8671e432a3f65f3890"} Jan 28 15:29:01 crc kubenswrapper[4893]: I0128 15:29:01.045696 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"671633bc-0311-475f-9e70-b101fa5257ad","Type":"ContainerDied","Data":"c7a3de3beb0ae3845a50972296912a22ab94b9d7baa0f84336a4a659f24cb547"} Jan 28 15:29:01 crc kubenswrapper[4893]: I0128 15:29:01.045717 4893 scope.go:117] "RemoveContainer" containerID="209d36d98c0be75b9759ce5a3e10e5042786d1fa0dfebc8671e432a3f65f3890" Jan 28 15:29:01 crc kubenswrapper[4893]: I0128 15:29:01.048922 4893 generic.go:334] "Generic (PLEG): container finished" podID="44befa59-1b8c-48a6-8ff8-5768d8d5f2c0" containerID="33224f119e8b5920f7b73ccd3e2c4b87b2d1767328e2569e3763864dcc54f584" exitCode=0 Jan 28 15:29:01 crc kubenswrapper[4893]: I0128 15:29:01.048996 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapifaf0-account-delete-hd8cb" event={"ID":"44befa59-1b8c-48a6-8ff8-5768d8d5f2c0","Type":"ContainerDied","Data":"33224f119e8b5920f7b73ccd3e2c4b87b2d1767328e2569e3763864dcc54f584"} Jan 28 15:29:01 crc kubenswrapper[4893]: I0128 15:29:01.050970 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"fc5e5f56-6f65-41e2-9d47-fe5a59541a00","Type":"ContainerDied","Data":"2dfb1af8c6f96982ddeb307352502bddc1f52bc5391d71fec41cd405fa5dac9e"} Jan 28 15:29:01 crc kubenswrapper[4893]: I0128 15:29:01.050982 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:29:01 crc kubenswrapper[4893]: I0128 15:29:01.057887 4893 generic.go:334] "Generic (PLEG): container finished" podID="3038d018-7447-46e9-ab20-20d56a9717b6" containerID="3d1b403d5632b8cf08f0c888989edd025237e7157b6141a656eaf8fd87353ba5" exitCode=0 Jan 28 15:29:01 crc kubenswrapper[4893]: I0128 15:29:01.057947 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1ce07-account-delete-qw4m6" event={"ID":"3038d018-7447-46e9-ab20-20d56a9717b6","Type":"ContainerDied","Data":"3d1b403d5632b8cf08f0c888989edd025237e7157b6141a656eaf8fd87353ba5"} Jan 28 15:29:01 crc kubenswrapper[4893]: I0128 15:29:01.057974 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1ce07-account-delete-qw4m6" event={"ID":"3038d018-7447-46e9-ab20-20d56a9717b6","Type":"ContainerStarted","Data":"140ec349a41dd275a624b5495cdd8fe9998f9169f0e639ea1166b4c0afdd4b9a"} Jan 28 15:29:01 crc kubenswrapper[4893]: I0128 15:29:01.078957 4893 scope.go:117] "RemoveContainer" containerID="209d36d98c0be75b9759ce5a3e10e5042786d1fa0dfebc8671e432a3f65f3890" Jan 28 15:29:01 crc kubenswrapper[4893]: E0128 15:29:01.083043 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"209d36d98c0be75b9759ce5a3e10e5042786d1fa0dfebc8671e432a3f65f3890\": container with ID starting with 209d36d98c0be75b9759ce5a3e10e5042786d1fa0dfebc8671e432a3f65f3890 not found: ID does not exist" containerID="209d36d98c0be75b9759ce5a3e10e5042786d1fa0dfebc8671e432a3f65f3890" Jan 28 15:29:01 crc kubenswrapper[4893]: I0128 15:29:01.083086 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"209d36d98c0be75b9759ce5a3e10e5042786d1fa0dfebc8671e432a3f65f3890"} err="failed to get container status \"209d36d98c0be75b9759ce5a3e10e5042786d1fa0dfebc8671e432a3f65f3890\": rpc error: code = NotFound desc = could not find container \"209d36d98c0be75b9759ce5a3e10e5042786d1fa0dfebc8671e432a3f65f3890\": container with ID starting with 209d36d98c0be75b9759ce5a3e10e5042786d1fa0dfebc8671e432a3f65f3890 not found: ID does not exist" Jan 28 15:29:01 crc kubenswrapper[4893]: I0128 15:29:01.083113 4893 scope.go:117] "RemoveContainer" containerID="4f44fa1204db3503cb800782b765933bcaeca9ba851a533a9f3ee1e6defdf509" Jan 28 15:29:01 crc kubenswrapper[4893]: I0128 15:29:01.127271 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 15:29:01 crc kubenswrapper[4893]: I0128 15:29:01.134631 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 15:29:01 crc kubenswrapper[4893]: I0128 15:29:01.156960 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 15:29:01 crc kubenswrapper[4893]: I0128 15:29:01.161164 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.066354 4893 generic.go:334] "Generic (PLEG): container finished" podID="c157b73b-8217-4593-bfa1-ed8b0191ec7e" containerID="e03291fba53d205068c9d3d3235f45af59ca514e95f692441d014b55a7efc62b" exitCode=0 Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.066540 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" 
event={"ID":"c157b73b-8217-4593-bfa1-ed8b0191ec7e","Type":"ContainerDied","Data":"e03291fba53d205068c9d3d3235f45af59ca514e95f692441d014b55a7efc62b"} Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.218583 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.402889 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c157b73b-8217-4593-bfa1-ed8b0191ec7e-config-data\") pod \"c157b73b-8217-4593-bfa1-ed8b0191ec7e\" (UID: \"c157b73b-8217-4593-bfa1-ed8b0191ec7e\") " Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.403331 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgvts\" (UniqueName: \"kubernetes.io/projected/c157b73b-8217-4593-bfa1-ed8b0191ec7e-kube-api-access-pgvts\") pod \"c157b73b-8217-4593-bfa1-ed8b0191ec7e\" (UID: \"c157b73b-8217-4593-bfa1-ed8b0191ec7e\") " Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.439081 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c157b73b-8217-4593-bfa1-ed8b0191ec7e-config-data" (OuterVolumeSpecName: "config-data") pod "c157b73b-8217-4593-bfa1-ed8b0191ec7e" (UID: "c157b73b-8217-4593-bfa1-ed8b0191ec7e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.439071 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c157b73b-8217-4593-bfa1-ed8b0191ec7e-kube-api-access-pgvts" (OuterVolumeSpecName: "kube-api-access-pgvts") pod "c157b73b-8217-4593-bfa1-ed8b0191ec7e" (UID: "c157b73b-8217-4593-bfa1-ed8b0191ec7e"). InnerVolumeSpecName "kube-api-access-pgvts". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.493086 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="79781d2a-b011-48ea-a6ef-038161633a26" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.137:8775/\": read tcp 10.217.0.2:42012->10.217.0.137:8775: read: connection reset by peer" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.493122 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="79781d2a-b011-48ea-a6ef-038161633a26" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.137:8775/\": read tcp 10.217.0.2:42028->10.217.0.137:8775: read: connection reset by peer" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.505545 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c157b73b-8217-4593-bfa1-ed8b0191ec7e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.505577 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgvts\" (UniqueName: \"kubernetes.io/projected/c157b73b-8217-4593-bfa1-ed8b0191ec7e-kube-api-access-pgvts\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.625415 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell0c310-account-delete-6bxln" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.636383 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell1ce07-account-delete-qw4m6" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.641336 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapifaf0-account-delete-hd8cb" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.786368 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.815734 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aef6e7fe-a1a0-4a6c-9c00-ba875605428b-config-data\") pod \"aef6e7fe-a1a0-4a6c-9c00-ba875605428b\" (UID: \"aef6e7fe-a1a0-4a6c-9c00-ba875605428b\") " Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.815815 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aef6e7fe-a1a0-4a6c-9c00-ba875605428b-logs\") pod \"aef6e7fe-a1a0-4a6c-9c00-ba875605428b\" (UID: \"aef6e7fe-a1a0-4a6c-9c00-ba875605428b\") " Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.815850 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d26e719-2040-449b-9b38-66cc87bb9d63-operator-scripts\") pod \"4d26e719-2040-449b-9b38-66cc87bb9d63\" (UID: \"4d26e719-2040-449b-9b38-66cc87bb9d63\") " Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.815879 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44befa59-1b8c-48a6-8ff8-5768d8d5f2c0-operator-scripts\") pod \"44befa59-1b8c-48a6-8ff8-5768d8d5f2c0\" (UID: \"44befa59-1b8c-48a6-8ff8-5768d8d5f2c0\") " Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.815919 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92z74\" (UniqueName: \"kubernetes.io/projected/aef6e7fe-a1a0-4a6c-9c00-ba875605428b-kube-api-access-92z74\") pod \"aef6e7fe-a1a0-4a6c-9c00-ba875605428b\" (UID: \"aef6e7fe-a1a0-4a6c-9c00-ba875605428b\") " Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.815981 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5ljx\" (UniqueName: \"kubernetes.io/projected/3038d018-7447-46e9-ab20-20d56a9717b6-kube-api-access-d5ljx\") pod \"3038d018-7447-46e9-ab20-20d56a9717b6\" (UID: \"3038d018-7447-46e9-ab20-20d56a9717b6\") " Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.816009 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3038d018-7447-46e9-ab20-20d56a9717b6-operator-scripts\") pod \"3038d018-7447-46e9-ab20-20d56a9717b6\" (UID: \"3038d018-7447-46e9-ab20-20d56a9717b6\") " Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.816039 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psv5k\" (UniqueName: \"kubernetes.io/projected/4d26e719-2040-449b-9b38-66cc87bb9d63-kube-api-access-psv5k\") pod \"4d26e719-2040-449b-9b38-66cc87bb9d63\" (UID: \"4d26e719-2040-449b-9b38-66cc87bb9d63\") " Jan 28 15:29:02 
crc kubenswrapper[4893]: I0128 15:29:02.816122 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d49j5\" (UniqueName: \"kubernetes.io/projected/44befa59-1b8c-48a6-8ff8-5768d8d5f2c0-kube-api-access-d49j5\") pod \"44befa59-1b8c-48a6-8ff8-5768d8d5f2c0\" (UID: \"44befa59-1b8c-48a6-8ff8-5768d8d5f2c0\") " Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.816384 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aef6e7fe-a1a0-4a6c-9c00-ba875605428b-logs" (OuterVolumeSpecName: "logs") pod "aef6e7fe-a1a0-4a6c-9c00-ba875605428b" (UID: "aef6e7fe-a1a0-4a6c-9c00-ba875605428b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.816464 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aef6e7fe-a1a0-4a6c-9c00-ba875605428b-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.817126 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d26e719-2040-449b-9b38-66cc87bb9d63-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4d26e719-2040-449b-9b38-66cc87bb9d63" (UID: "4d26e719-2040-449b-9b38-66cc87bb9d63"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.817165 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3038d018-7447-46e9-ab20-20d56a9717b6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3038d018-7447-46e9-ab20-20d56a9717b6" (UID: "3038d018-7447-46e9-ab20-20d56a9717b6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.817647 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44befa59-1b8c-48a6-8ff8-5768d8d5f2c0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "44befa59-1b8c-48a6-8ff8-5768d8d5f2c0" (UID: "44befa59-1b8c-48a6-8ff8-5768d8d5f2c0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.821580 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44befa59-1b8c-48a6-8ff8-5768d8d5f2c0-kube-api-access-d49j5" (OuterVolumeSpecName: "kube-api-access-d49j5") pod "44befa59-1b8c-48a6-8ff8-5768d8d5f2c0" (UID: "44befa59-1b8c-48a6-8ff8-5768d8d5f2c0"). InnerVolumeSpecName "kube-api-access-d49j5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.821834 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aef6e7fe-a1a0-4a6c-9c00-ba875605428b-kube-api-access-92z74" (OuterVolumeSpecName: "kube-api-access-92z74") pod "aef6e7fe-a1a0-4a6c-9c00-ba875605428b" (UID: "aef6e7fe-a1a0-4a6c-9c00-ba875605428b"). InnerVolumeSpecName "kube-api-access-92z74". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.821949 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d26e719-2040-449b-9b38-66cc87bb9d63-kube-api-access-psv5k" (OuterVolumeSpecName: "kube-api-access-psv5k") pod "4d26e719-2040-449b-9b38-66cc87bb9d63" (UID: "4d26e719-2040-449b-9b38-66cc87bb9d63"). InnerVolumeSpecName "kube-api-access-psv5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.823000 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3038d018-7447-46e9-ab20-20d56a9717b6-kube-api-access-d5ljx" (OuterVolumeSpecName: "kube-api-access-d5ljx") pod "3038d018-7447-46e9-ab20-20d56a9717b6" (UID: "3038d018-7447-46e9-ab20-20d56a9717b6"). InnerVolumeSpecName "kube-api-access-d5ljx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.851281 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.852163 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aef6e7fe-a1a0-4a6c-9c00-ba875605428b-config-data" (OuterVolumeSpecName: "config-data") pod "aef6e7fe-a1a0-4a6c-9c00-ba875605428b" (UID: "aef6e7fe-a1a0-4a6c-9c00-ba875605428b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.904956 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="671633bc-0311-475f-9e70-b101fa5257ad" path="/var/lib/kubelet/pods/671633bc-0311-475f-9e70-b101fa5257ad/volumes" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.905569 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc5e5f56-6f65-41e2-9d47-fe5a59541a00" path="/var/lib/kubelet/pods/fc5e5f56-6f65-41e2-9d47-fe5a59541a00/volumes" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.916843 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllm8\" (UniqueName: \"kubernetes.io/projected/79781d2a-b011-48ea-a6ef-038161633a26-kube-api-access-pllm8\") pod \"79781d2a-b011-48ea-a6ef-038161633a26\" (UID: \"79781d2a-b011-48ea-a6ef-038161633a26\") " Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.916911 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79781d2a-b011-48ea-a6ef-038161633a26-config-data\") pod \"79781d2a-b011-48ea-a6ef-038161633a26\" (UID: \"79781d2a-b011-48ea-a6ef-038161633a26\") " Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.916946 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79781d2a-b011-48ea-a6ef-038161633a26-logs\") pod \"79781d2a-b011-48ea-a6ef-038161633a26\" (UID: \"79781d2a-b011-48ea-a6ef-038161633a26\") " Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.917316 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d49j5\" (UniqueName: \"kubernetes.io/projected/44befa59-1b8c-48a6-8ff8-5768d8d5f2c0-kube-api-access-d49j5\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.917340 4893 reconciler_common.go:293] "Volume detached for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/aef6e7fe-a1a0-4a6c-9c00-ba875605428b-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.917352 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d26e719-2040-449b-9b38-66cc87bb9d63-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.917363 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44befa59-1b8c-48a6-8ff8-5768d8d5f2c0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.917374 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92z74\" (UniqueName: \"kubernetes.io/projected/aef6e7fe-a1a0-4a6c-9c00-ba875605428b-kube-api-access-92z74\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.917385 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5ljx\" (UniqueName: \"kubernetes.io/projected/3038d018-7447-46e9-ab20-20d56a9717b6-kube-api-access-d5ljx\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.917395 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3038d018-7447-46e9-ab20-20d56a9717b6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.917408 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psv5k\" (UniqueName: \"kubernetes.io/projected/4d26e719-2040-449b-9b38-66cc87bb9d63-kube-api-access-psv5k\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.918594 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79781d2a-b011-48ea-a6ef-038161633a26-logs" (OuterVolumeSpecName: "logs") pod "79781d2a-b011-48ea-a6ef-038161633a26" (UID: "79781d2a-b011-48ea-a6ef-038161633a26"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.925197 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79781d2a-b011-48ea-a6ef-038161633a26-kube-api-access-pllm8" (OuterVolumeSpecName: "kube-api-access-pllm8") pod "79781d2a-b011-48ea-a6ef-038161633a26" (UID: "79781d2a-b011-48ea-a6ef-038161633a26"). InnerVolumeSpecName "kube-api-access-pllm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:29:02 crc kubenswrapper[4893]: I0128 15:29:02.967289 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79781d2a-b011-48ea-a6ef-038161633a26-config-data" (OuterVolumeSpecName: "config-data") pod "79781d2a-b011-48ea-a6ef-038161633a26" (UID: "79781d2a-b011-48ea-a6ef-038161633a26"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.019092 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pllm8\" (UniqueName: \"kubernetes.io/projected/79781d2a-b011-48ea-a6ef-038161633a26-kube-api-access-pllm8\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.019356 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79781d2a-b011-48ea-a6ef-038161633a26-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.019372 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79781d2a-b011-48ea-a6ef-038161633a26-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.078707 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1ce07-account-delete-qw4m6" event={"ID":"3038d018-7447-46e9-ab20-20d56a9717b6","Type":"ContainerDied","Data":"140ec349a41dd275a624b5495cdd8fe9998f9169f0e639ea1166b4c0afdd4b9a"} Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.078779 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="140ec349a41dd275a624b5495cdd8fe9998f9169f0e639ea1166b4c0afdd4b9a" Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.078745 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell1ce07-account-delete-qw4m6" Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.081459 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.081509 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"aef6e7fe-a1a0-4a6c-9c00-ba875605428b","Type":"ContainerDied","Data":"75606480fc61ecb85560c328753104bbd1c6f4d2a9da3715865a87e986b59849"} Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.081564 4893 scope.go:117] "RemoveContainer" containerID="75606480fc61ecb85560c328753104bbd1c6f4d2a9da3715865a87e986b59849" Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.081465 4893 generic.go:334] "Generic (PLEG): container finished" podID="aef6e7fe-a1a0-4a6c-9c00-ba875605428b" containerID="75606480fc61ecb85560c328753104bbd1c6f4d2a9da3715865a87e986b59849" exitCode=0 Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.081742 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"aef6e7fe-a1a0-4a6c-9c00-ba875605428b","Type":"ContainerDied","Data":"b573cb650459f23b55ce61ed2468f3b414b1fde0aa78aea5c83012629493ad17"} Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.084819 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell0c310-account-delete-6bxln" Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.084818 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell0c310-account-delete-6bxln" event={"ID":"4d26e719-2040-449b-9b38-66cc87bb9d63","Type":"ContainerDied","Data":"780bb618461b49fe75d449a7edbbbec1d578fca5b05c0559d37d700143b4728a"} Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.084860 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="780bb618461b49fe75d449a7edbbbec1d578fca5b05c0559d37d700143b4728a" Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.086568 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"c157b73b-8217-4593-bfa1-ed8b0191ec7e","Type":"ContainerDied","Data":"dfc5c538b748943a0ed84024509d92613cdb22e60aab23af4d0a64e686932283"} Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.086577 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.088354 4893 generic.go:334] "Generic (PLEG): container finished" podID="79781d2a-b011-48ea-a6ef-038161633a26" containerID="6e2b6ceef01087ae6c36c2862df43479e126a2e0a17c3c6e4303587f6ae89a81" exitCode=0 Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.088401 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"79781d2a-b011-48ea-a6ef-038161633a26","Type":"ContainerDied","Data":"6e2b6ceef01087ae6c36c2862df43479e126a2e0a17c3c6e4303587f6ae89a81"} Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.088416 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"79781d2a-b011-48ea-a6ef-038161633a26","Type":"ContainerDied","Data":"48bc9df7ca656e80e81d8465c5922c351f7cc3b0e8d875f9affd32ef1bdec7f2"} Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.088507 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.090647 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapifaf0-account-delete-hd8cb" event={"ID":"44befa59-1b8c-48a6-8ff8-5768d8d5f2c0","Type":"ContainerDied","Data":"167fa75e5faf789fb7452c079a6150524984c1a97fe15b2cdf91ffbeecb21e65"} Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.090703 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="167fa75e5faf789fb7452c079a6150524984c1a97fe15b2cdf91ffbeecb21e65" Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.090767 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novaapifaf0-account-delete-hd8cb" Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.107910 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.118085 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.121812 4893 scope.go:117] "RemoveContainer" containerID="f3392203e41a092353a673e5cfd80d2beb0dcb2175bcfcc07fbe6a5df465b61a" Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.134976 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.140517 4893 scope.go:117] "RemoveContainer" containerID="75606480fc61ecb85560c328753104bbd1c6f4d2a9da3715865a87e986b59849" Jan 28 15:29:03 crc kubenswrapper[4893]: E0128 15:29:03.142892 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75606480fc61ecb85560c328753104bbd1c6f4d2a9da3715865a87e986b59849\": container with ID starting with 75606480fc61ecb85560c328753104bbd1c6f4d2a9da3715865a87e986b59849 not found: ID does not exist" containerID="75606480fc61ecb85560c328753104bbd1c6f4d2a9da3715865a87e986b59849" Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.142946 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75606480fc61ecb85560c328753104bbd1c6f4d2a9da3715865a87e986b59849"} err="failed to get container status \"75606480fc61ecb85560c328753104bbd1c6f4d2a9da3715865a87e986b59849\": rpc error: code = NotFound desc = could not find container \"75606480fc61ecb85560c328753104bbd1c6f4d2a9da3715865a87e986b59849\": container with ID starting with 75606480fc61ecb85560c328753104bbd1c6f4d2a9da3715865a87e986b59849 not found: ID does not exist" Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.142977 4893 scope.go:117] "RemoveContainer" containerID="f3392203e41a092353a673e5cfd80d2beb0dcb2175bcfcc07fbe6a5df465b61a" Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.143496 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:29:03 crc kubenswrapper[4893]: E0128 15:29:03.144457 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3392203e41a092353a673e5cfd80d2beb0dcb2175bcfcc07fbe6a5df465b61a\": container with ID starting with f3392203e41a092353a673e5cfd80d2beb0dcb2175bcfcc07fbe6a5df465b61a not found: ID does not exist" containerID="f3392203e41a092353a673e5cfd80d2beb0dcb2175bcfcc07fbe6a5df465b61a" Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.144542 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3392203e41a092353a673e5cfd80d2beb0dcb2175bcfcc07fbe6a5df465b61a"} err="failed to get container status \"f3392203e41a092353a673e5cfd80d2beb0dcb2175bcfcc07fbe6a5df465b61a\": rpc error: code = NotFound desc = could not find container \"f3392203e41a092353a673e5cfd80d2beb0dcb2175bcfcc07fbe6a5df465b61a\": container with ID starting with f3392203e41a092353a673e5cfd80d2beb0dcb2175bcfcc07fbe6a5df465b61a not found: ID does not exist" Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.144574 4893 scope.go:117] "RemoveContainer" 
containerID="e03291fba53d205068c9d3d3235f45af59ca514e95f692441d014b55a7efc62b" Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.150552 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.157505 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.212849 4893 scope.go:117] "RemoveContainer" containerID="6e2b6ceef01087ae6c36c2862df43479e126a2e0a17c3c6e4303587f6ae89a81" Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.234288 4893 scope.go:117] "RemoveContainer" containerID="c875e9afafaae49b47ee26abb590701a147bab3af3ac374bd6fd73365949911b" Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.256649 4893 scope.go:117] "RemoveContainer" containerID="6e2b6ceef01087ae6c36c2862df43479e126a2e0a17c3c6e4303587f6ae89a81" Jan 28 15:29:03 crc kubenswrapper[4893]: E0128 15:29:03.257701 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e2b6ceef01087ae6c36c2862df43479e126a2e0a17c3c6e4303587f6ae89a81\": container with ID starting with 6e2b6ceef01087ae6c36c2862df43479e126a2e0a17c3c6e4303587f6ae89a81 not found: ID does not exist" containerID="6e2b6ceef01087ae6c36c2862df43479e126a2e0a17c3c6e4303587f6ae89a81" Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.257732 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e2b6ceef01087ae6c36c2862df43479e126a2e0a17c3c6e4303587f6ae89a81"} err="failed to get container status \"6e2b6ceef01087ae6c36c2862df43479e126a2e0a17c3c6e4303587f6ae89a81\": rpc error: code = NotFound desc = could not find container \"6e2b6ceef01087ae6c36c2862df43479e126a2e0a17c3c6e4303587f6ae89a81\": container with ID starting with 6e2b6ceef01087ae6c36c2862df43479e126a2e0a17c3c6e4303587f6ae89a81 not found: ID does not exist" Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.257757 4893 scope.go:117] "RemoveContainer" containerID="c875e9afafaae49b47ee26abb590701a147bab3af3ac374bd6fd73365949911b" Jan 28 15:29:03 crc kubenswrapper[4893]: E0128 15:29:03.258110 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c875e9afafaae49b47ee26abb590701a147bab3af3ac374bd6fd73365949911b\": container with ID starting with c875e9afafaae49b47ee26abb590701a147bab3af3ac374bd6fd73365949911b not found: ID does not exist" containerID="c875e9afafaae49b47ee26abb590701a147bab3af3ac374bd6fd73365949911b" Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.258156 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c875e9afafaae49b47ee26abb590701a147bab3af3ac374bd6fd73365949911b"} err="failed to get container status \"c875e9afafaae49b47ee26abb590701a147bab3af3ac374bd6fd73365949911b\": rpc error: code = NotFound desc = could not find container \"c875e9afafaae49b47ee26abb590701a147bab3af3ac374bd6fd73365949911b\": container with ID starting with c875e9afafaae49b47ee26abb590701a147bab3af3ac374bd6fd73365949911b not found: ID does not exist" Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.940798 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-8sm77"] Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.949013 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["nova-kuttl-default/nova-cell1-db-create-8sm77"] Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.958856 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell1ce07-account-delete-qw4m6"] Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.967027 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell1ce07-account-delete-qw4m6"] Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.973507 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-ce07-account-create-update-g4vck"] Jan 28 15:29:03 crc kubenswrapper[4893]: I0128 15:29:03.983198 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-ce07-account-create-update-g4vck"] Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.066186 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-db-create-lkskx"] Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.080610 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-db-create-lkskx"] Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.094901 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-faf0-account-create-update-mnxdx"] Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.102988 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novaapifaf0-account-delete-hd8cb"] Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.111183 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-faf0-account-create-update-mnxdx"] Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.120755 4893 generic.go:334] "Generic (PLEG): container finished" podID="db584031-a14c-4916-a5de-767628445966" containerID="3b74684b7ff31dd2de1020b80d60cadc54168506dc25324121a4b86128900c97" exitCode=0 Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.120823 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"db584031-a14c-4916-a5de-767628445966","Type":"ContainerDied","Data":"3b74684b7ff31dd2de1020b80d60cadc54168506dc25324121a4b86128900c97"} Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.121302 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novaapifaf0-account-delete-hd8cb"] Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.170245 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-jwsh9"] Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.179852 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-jwsh9"] Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.200917 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-c310-account-create-update-g6lj5"] Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.208454 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell0c310-account-delete-6bxln"] Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.214719 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-c310-account-create-update-g6lj5"] Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.220775 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell0c310-account-delete-6bxln"] Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 
15:29:04.229857 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.339804 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db584031-a14c-4916-a5de-767628445966-config-data\") pod \"db584031-a14c-4916-a5de-767628445966\" (UID: \"db584031-a14c-4916-a5de-767628445966\") " Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.339855 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d775v\" (UniqueName: \"kubernetes.io/projected/db584031-a14c-4916-a5de-767628445966-kube-api-access-d775v\") pod \"db584031-a14c-4916-a5de-767628445966\" (UID: \"db584031-a14c-4916-a5de-767628445966\") " Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.346309 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db584031-a14c-4916-a5de-767628445966-kube-api-access-d775v" (OuterVolumeSpecName: "kube-api-access-d775v") pod "db584031-a14c-4916-a5de-767628445966" (UID: "db584031-a14c-4916-a5de-767628445966"). InnerVolumeSpecName "kube-api-access-d775v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.364949 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db584031-a14c-4916-a5de-767628445966-config-data" (OuterVolumeSpecName: "config-data") pod "db584031-a14c-4916-a5de-767628445966" (UID: "db584031-a14c-4916-a5de-767628445966"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.441905 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db584031-a14c-4916-a5de-767628445966-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.441959 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d775v\" (UniqueName: \"kubernetes.io/projected/db584031-a14c-4916-a5de-767628445966-kube-api-access-d775v\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.901555 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3038d018-7447-46e9-ab20-20d56a9717b6" path="/var/lib/kubelet/pods/3038d018-7447-46e9-ab20-20d56a9717b6/volumes" Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.902194 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34ab2220-9c97-48c4-8d5e-f53670f6f731" path="/var/lib/kubelet/pods/34ab2220-9c97-48c4-8d5e-f53670f6f731/volumes" Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.902763 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="439767c7-ee3b-4574-979b-9d59e1018a5e" path="/var/lib/kubelet/pods/439767c7-ee3b-4574-979b-9d59e1018a5e/volumes" Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.903286 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44befa59-1b8c-48a6-8ff8-5768d8d5f2c0" path="/var/lib/kubelet/pods/44befa59-1b8c-48a6-8ff8-5768d8d5f2c0/volumes" Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.904384 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d26e719-2040-449b-9b38-66cc87bb9d63" path="/var/lib/kubelet/pods/4d26e719-2040-449b-9b38-66cc87bb9d63/volumes" Jan 28 
15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.904965 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="526041d2-fb34-40ee-b6d7-c45e3f38041f" path="/var/lib/kubelet/pods/526041d2-fb34-40ee-b6d7-c45e3f38041f/volumes" Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.905610 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79781d2a-b011-48ea-a6ef-038161633a26" path="/var/lib/kubelet/pods/79781d2a-b011-48ea-a6ef-038161633a26/volumes" Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.906637 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fcf8457-d45f-44b4-9ec1-2635dfea5f76" path="/var/lib/kubelet/pods/8fcf8457-d45f-44b4-9ec1-2635dfea5f76/volumes" Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.907178 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aef6e7fe-a1a0-4a6c-9c00-ba875605428b" path="/var/lib/kubelet/pods/aef6e7fe-a1a0-4a6c-9c00-ba875605428b/volumes" Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.907741 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c157b73b-8217-4593-bfa1-ed8b0191ec7e" path="/var/lib/kubelet/pods/c157b73b-8217-4593-bfa1-ed8b0191ec7e/volumes" Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.908704 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4070239-a360-41b6-b1c1-27ca8e2c901d" path="/var/lib/kubelet/pods/d4070239-a360-41b6-b1c1-27ca8e2c901d/volumes" Jan 28 15:29:04 crc kubenswrapper[4893]: I0128 15:29:04.909256 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e62730d4-0cfa-41ce-a3e5-7ea0f64739c0" path="/var/lib/kubelet/pods/e62730d4-0cfa-41ce-a3e5-7ea0f64739c0/volumes" Jan 28 15:29:05 crc kubenswrapper[4893]: I0128 15:29:05.135101 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"db584031-a14c-4916-a5de-767628445966","Type":"ContainerDied","Data":"dbc312e37f41ae658b018a1b398c10a6b5e4012c86cbd536d3fce69ec4133461"} Jan 28 15:29:05 crc kubenswrapper[4893]: I0128 15:29:05.135179 4893 scope.go:117] "RemoveContainer" containerID="3b74684b7ff31dd2de1020b80d60cadc54168506dc25324121a4b86128900c97" Jan 28 15:29:05 crc kubenswrapper[4893]: I0128 15:29:05.135347 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:29:05 crc kubenswrapper[4893]: I0128 15:29:05.164264 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 15:29:05 crc kubenswrapper[4893]: I0128 15:29:05.175306 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 15:29:05 crc kubenswrapper[4893]: I0128 15:29:05.722774 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:29:05 crc kubenswrapper[4893]: I0128 15:29:05.723123 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:29:05 crc kubenswrapper[4893]: I0128 15:29:05.723172 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" Jan 28 15:29:05 crc kubenswrapper[4893]: I0128 15:29:05.723878 4893 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51"} pod="openshift-machine-config-operator/machine-config-daemon-l2nht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:29:05 crc kubenswrapper[4893]: I0128 15:29:05.723942 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" containerID="cri-o://79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51" gracePeriod=600 Jan 28 15:29:05 crc kubenswrapper[4893]: E0128 15:29:05.864040 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.084626 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-db-create-s7sfs"] Jan 28 15:29:06 crc kubenswrapper[4893]: E0128 15:29:06.085270 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79781d2a-b011-48ea-a6ef-038161633a26" containerName="nova-kuttl-metadata-log" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.085353 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="79781d2a-b011-48ea-a6ef-038161633a26" containerName="nova-kuttl-metadata-log" Jan 28 15:29:06 crc kubenswrapper[4893]: E0128 15:29:06.085422 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aef6e7fe-a1a0-4a6c-9c00-ba875605428b" containerName="nova-kuttl-api-log" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.085489 4893 
state_mem.go:107] "Deleted CPUSet assignment" podUID="aef6e7fe-a1a0-4a6c-9c00-ba875605428b" containerName="nova-kuttl-api-log" Jan 28 15:29:06 crc kubenswrapper[4893]: E0128 15:29:06.085585 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d26e719-2040-449b-9b38-66cc87bb9d63" containerName="mariadb-account-delete" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.085634 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d26e719-2040-449b-9b38-66cc87bb9d63" containerName="mariadb-account-delete" Jan 28 15:29:06 crc kubenswrapper[4893]: E0128 15:29:06.085807 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79781d2a-b011-48ea-a6ef-038161633a26" containerName="nova-kuttl-metadata-metadata" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.085859 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="79781d2a-b011-48ea-a6ef-038161633a26" containerName="nova-kuttl-metadata-metadata" Jan 28 15:29:06 crc kubenswrapper[4893]: E0128 15:29:06.085909 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3038d018-7447-46e9-ab20-20d56a9717b6" containerName="mariadb-account-delete" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.085962 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="3038d018-7447-46e9-ab20-20d56a9717b6" containerName="mariadb-account-delete" Jan 28 15:29:06 crc kubenswrapper[4893]: E0128 15:29:06.086026 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc5e5f56-6f65-41e2-9d47-fe5a59541a00" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.086075 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc5e5f56-6f65-41e2-9d47-fe5a59541a00" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 28 15:29:06 crc kubenswrapper[4893]: E0128 15:29:06.086128 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db584031-a14c-4916-a5de-767628445966" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.086176 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="db584031-a14c-4916-a5de-767628445966" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 15:29:06 crc kubenswrapper[4893]: E0128 15:29:06.086225 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="671633bc-0311-475f-9e70-b101fa5257ad" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.086373 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="671633bc-0311-475f-9e70-b101fa5257ad" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 15:29:06 crc kubenswrapper[4893]: E0128 15:29:06.086434 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44befa59-1b8c-48a6-8ff8-5768d8d5f2c0" containerName="mariadb-account-delete" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.086557 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="44befa59-1b8c-48a6-8ff8-5768d8d5f2c0" containerName="mariadb-account-delete" Jan 28 15:29:06 crc kubenswrapper[4893]: E0128 15:29:06.086653 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aef6e7fe-a1a0-4a6c-9c00-ba875605428b" containerName="nova-kuttl-api-api" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.086736 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="aef6e7fe-a1a0-4a6c-9c00-ba875605428b" containerName="nova-kuttl-api-api" Jan 28 15:29:06 crc kubenswrapper[4893]: E0128 
15:29:06.086818 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c157b73b-8217-4593-bfa1-ed8b0191ec7e" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.086870 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="c157b73b-8217-4593-bfa1-ed8b0191ec7e" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.087067 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="79781d2a-b011-48ea-a6ef-038161633a26" containerName="nova-kuttl-metadata-log" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.087131 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="db584031-a14c-4916-a5de-767628445966" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.087186 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="671633bc-0311-475f-9e70-b101fa5257ad" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.087240 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d26e719-2040-449b-9b38-66cc87bb9d63" containerName="mariadb-account-delete" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.087319 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="3038d018-7447-46e9-ab20-20d56a9717b6" containerName="mariadb-account-delete" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.087377 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="aef6e7fe-a1a0-4a6c-9c00-ba875605428b" containerName="nova-kuttl-api-api" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.087436 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc5e5f56-6f65-41e2-9d47-fe5a59541a00" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.087506 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="aef6e7fe-a1a0-4a6c-9c00-ba875605428b" containerName="nova-kuttl-api-log" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.087559 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="44befa59-1b8c-48a6-8ff8-5768d8d5f2c0" containerName="mariadb-account-delete" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.087624 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="c157b73b-8217-4593-bfa1-ed8b0191ec7e" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.087694 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="79781d2a-b011-48ea-a6ef-038161633a26" containerName="nova-kuttl-metadata-metadata" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.088338 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-s7sfs" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.097737 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-s7sfs"] Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.151296 4893 generic.go:334] "Generic (PLEG): container finished" podID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerID="79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51" exitCode=0 Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.151376 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" event={"ID":"b2ddd967-f9a8-464a-95de-512c9c5874fd","Type":"ContainerDied","Data":"79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51"} Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.151437 4893 scope.go:117] "RemoveContainer" containerID="d8e5d57be71719656edc4624e7904c0b8f16b72637bcea1f2d833d180bb5c4bd" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.152010 4893 scope.go:117] "RemoveContainer" containerID="79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51" Jan 28 15:29:06 crc kubenswrapper[4893]: E0128 15:29:06.152255 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.224258 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-lcqjt"] Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.225992 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-lcqjt" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.231859 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-lcqjt"] Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.297670 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-9f40-account-create-update-jlpld"] Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.298726 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-9f40-account-create-update-jlpld" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.299854 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntd6b\" (UniqueName: \"kubernetes.io/projected/3c3aa8f2-d928-410e-b3b4-57c85bba4490-kube-api-access-ntd6b\") pod \"nova-api-db-create-s7sfs\" (UID: \"3c3aa8f2-d928-410e-b3b4-57c85bba4490\") " pod="nova-kuttl-default/nova-api-db-create-s7sfs" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.300022 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c3aa8f2-d928-410e-b3b4-57c85bba4490-operator-scripts\") pod \"nova-api-db-create-s7sfs\" (UID: \"3c3aa8f2-d928-410e-b3b4-57c85bba4490\") " pod="nova-kuttl-default/nova-api-db-create-s7sfs" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.302749 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-api-db-secret" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.311409 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-9f40-account-create-update-jlpld"] Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.387767 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-2gr65"] Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.389072 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-2gr65" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.399421 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-2gr65"] Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.401199 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntd6b\" (UniqueName: \"kubernetes.io/projected/3c3aa8f2-d928-410e-b3b4-57c85bba4490-kube-api-access-ntd6b\") pod \"nova-api-db-create-s7sfs\" (UID: \"3c3aa8f2-d928-410e-b3b4-57c85bba4490\") " pod="nova-kuttl-default/nova-api-db-create-s7sfs" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.401253 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0671435f-14c4-40d2-8af9-173b53e986e6-operator-scripts\") pod \"nova-cell0-db-create-lcqjt\" (UID: \"0671435f-14c4-40d2-8af9-173b53e986e6\") " pod="nova-kuttl-default/nova-cell0-db-create-lcqjt" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.401290 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqw8c\" (UniqueName: \"kubernetes.io/projected/e57926c0-c91a-4479-9440-de28827aa98f-kube-api-access-wqw8c\") pod \"nova-api-9f40-account-create-update-jlpld\" (UID: \"e57926c0-c91a-4479-9440-de28827aa98f\") " pod="nova-kuttl-default/nova-api-9f40-account-create-update-jlpld" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.401355 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e57926c0-c91a-4479-9440-de28827aa98f-operator-scripts\") pod \"nova-api-9f40-account-create-update-jlpld\" (UID: \"e57926c0-c91a-4479-9440-de28827aa98f\") " pod="nova-kuttl-default/nova-api-9f40-account-create-update-jlpld" Jan 28 
15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.401901 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hk6x\" (UniqueName: \"kubernetes.io/projected/0671435f-14c4-40d2-8af9-173b53e986e6-kube-api-access-7hk6x\") pod \"nova-cell0-db-create-lcqjt\" (UID: \"0671435f-14c4-40d2-8af9-173b53e986e6\") " pod="nova-kuttl-default/nova-cell0-db-create-lcqjt" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.402247 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c3aa8f2-d928-410e-b3b4-57c85bba4490-operator-scripts\") pod \"nova-api-db-create-s7sfs\" (UID: \"3c3aa8f2-d928-410e-b3b4-57c85bba4490\") " pod="nova-kuttl-default/nova-api-db-create-s7sfs" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.403347 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c3aa8f2-d928-410e-b3b4-57c85bba4490-operator-scripts\") pod \"nova-api-db-create-s7sfs\" (UID: \"3c3aa8f2-d928-410e-b3b4-57c85bba4490\") " pod="nova-kuttl-default/nova-api-db-create-s7sfs" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.423945 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntd6b\" (UniqueName: \"kubernetes.io/projected/3c3aa8f2-d928-410e-b3b4-57c85bba4490-kube-api-access-ntd6b\") pod \"nova-api-db-create-s7sfs\" (UID: \"3c3aa8f2-d928-410e-b3b4-57c85bba4490\") " pod="nova-kuttl-default/nova-api-db-create-s7sfs" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.443042 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-s7sfs" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.498505 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-d7da-account-create-update-lt48b"] Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.499945 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-d7da-account-create-update-lt48b" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.504135 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell0-db-secret" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.504754 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0671435f-14c4-40d2-8af9-173b53e986e6-operator-scripts\") pod \"nova-cell0-db-create-lcqjt\" (UID: \"0671435f-14c4-40d2-8af9-173b53e986e6\") " pod="nova-kuttl-default/nova-cell0-db-create-lcqjt" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.504796 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqw8c\" (UniqueName: \"kubernetes.io/projected/e57926c0-c91a-4479-9440-de28827aa98f-kube-api-access-wqw8c\") pod \"nova-api-9f40-account-create-update-jlpld\" (UID: \"e57926c0-c91a-4479-9440-de28827aa98f\") " pod="nova-kuttl-default/nova-api-9f40-account-create-update-jlpld" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.504873 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e57926c0-c91a-4479-9440-de28827aa98f-operator-scripts\") pod \"nova-api-9f40-account-create-update-jlpld\" (UID: \"e57926c0-c91a-4479-9440-de28827aa98f\") " pod="nova-kuttl-default/nova-api-9f40-account-create-update-jlpld" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.504905 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42l47\" (UniqueName: \"kubernetes.io/projected/0ef8b37b-ceed-44d3-9d50-f713684f2b04-kube-api-access-42l47\") pod \"nova-cell1-db-create-2gr65\" (UID: \"0ef8b37b-ceed-44d3-9d50-f713684f2b04\") " pod="nova-kuttl-default/nova-cell1-db-create-2gr65" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.504939 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hk6x\" (UniqueName: \"kubernetes.io/projected/0671435f-14c4-40d2-8af9-173b53e986e6-kube-api-access-7hk6x\") pod \"nova-cell0-db-create-lcqjt\" (UID: \"0671435f-14c4-40d2-8af9-173b53e986e6\") " pod="nova-kuttl-default/nova-cell0-db-create-lcqjt" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.505015 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef8b37b-ceed-44d3-9d50-f713684f2b04-operator-scripts\") pod \"nova-cell1-db-create-2gr65\" (UID: \"0ef8b37b-ceed-44d3-9d50-f713684f2b04\") " pod="nova-kuttl-default/nova-cell1-db-create-2gr65" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.505933 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0671435f-14c4-40d2-8af9-173b53e986e6-operator-scripts\") pod \"nova-cell0-db-create-lcqjt\" (UID: \"0671435f-14c4-40d2-8af9-173b53e986e6\") " pod="nova-kuttl-default/nova-cell0-db-create-lcqjt" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.505991 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e57926c0-c91a-4479-9440-de28827aa98f-operator-scripts\") pod \"nova-api-9f40-account-create-update-jlpld\" (UID: \"e57926c0-c91a-4479-9440-de28827aa98f\") " 
pod="nova-kuttl-default/nova-api-9f40-account-create-update-jlpld" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.511291 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-d7da-account-create-update-lt48b"] Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.541848 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hk6x\" (UniqueName: \"kubernetes.io/projected/0671435f-14c4-40d2-8af9-173b53e986e6-kube-api-access-7hk6x\") pod \"nova-cell0-db-create-lcqjt\" (UID: \"0671435f-14c4-40d2-8af9-173b53e986e6\") " pod="nova-kuttl-default/nova-cell0-db-create-lcqjt" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.542629 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqw8c\" (UniqueName: \"kubernetes.io/projected/e57926c0-c91a-4479-9440-de28827aa98f-kube-api-access-wqw8c\") pod \"nova-api-9f40-account-create-update-jlpld\" (UID: \"e57926c0-c91a-4479-9440-de28827aa98f\") " pod="nova-kuttl-default/nova-api-9f40-account-create-update-jlpld" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.608458 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42l47\" (UniqueName: \"kubernetes.io/projected/0ef8b37b-ceed-44d3-9d50-f713684f2b04-kube-api-access-42l47\") pod \"nova-cell1-db-create-2gr65\" (UID: \"0ef8b37b-ceed-44d3-9d50-f713684f2b04\") " pod="nova-kuttl-default/nova-cell1-db-create-2gr65" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.608544 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a-operator-scripts\") pod \"nova-cell0-d7da-account-create-update-lt48b\" (UID: \"4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a\") " pod="nova-kuttl-default/nova-cell0-d7da-account-create-update-lt48b" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.608579 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5xkm\" (UniqueName: \"kubernetes.io/projected/4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a-kube-api-access-r5xkm\") pod \"nova-cell0-d7da-account-create-update-lt48b\" (UID: \"4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a\") " pod="nova-kuttl-default/nova-cell0-d7da-account-create-update-lt48b" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.608603 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef8b37b-ceed-44d3-9d50-f713684f2b04-operator-scripts\") pod \"nova-cell1-db-create-2gr65\" (UID: \"0ef8b37b-ceed-44d3-9d50-f713684f2b04\") " pod="nova-kuttl-default/nova-cell1-db-create-2gr65" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.609608 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef8b37b-ceed-44d3-9d50-f713684f2b04-operator-scripts\") pod \"nova-cell1-db-create-2gr65\" (UID: \"0ef8b37b-ceed-44d3-9d50-f713684f2b04\") " pod="nova-kuttl-default/nova-cell1-db-create-2gr65" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.612876 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-9f40-account-create-update-jlpld" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.629564 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42l47\" (UniqueName: \"kubernetes.io/projected/0ef8b37b-ceed-44d3-9d50-f713684f2b04-kube-api-access-42l47\") pod \"nova-cell1-db-create-2gr65\" (UID: \"0ef8b37b-ceed-44d3-9d50-f713684f2b04\") " pod="nova-kuttl-default/nova-cell1-db-create-2gr65" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.698016 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-b9a3-account-create-update-8sbdz"] Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.699296 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-b9a3-account-create-update-8sbdz" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.702292 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell1-db-secret" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.710419 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vrpg\" (UniqueName: \"kubernetes.io/projected/d5122cff-317d-492a-876b-f13a62d6e1db-kube-api-access-9vrpg\") pod \"nova-cell1-b9a3-account-create-update-8sbdz\" (UID: \"d5122cff-317d-492a-876b-f13a62d6e1db\") " pod="nova-kuttl-default/nova-cell1-b9a3-account-create-update-8sbdz" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.710519 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5122cff-317d-492a-876b-f13a62d6e1db-operator-scripts\") pod \"nova-cell1-b9a3-account-create-update-8sbdz\" (UID: \"d5122cff-317d-492a-876b-f13a62d6e1db\") " pod="nova-kuttl-default/nova-cell1-b9a3-account-create-update-8sbdz" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.710550 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a-operator-scripts\") pod \"nova-cell0-d7da-account-create-update-lt48b\" (UID: \"4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a\") " pod="nova-kuttl-default/nova-cell0-d7da-account-create-update-lt48b" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.710584 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5xkm\" (UniqueName: \"kubernetes.io/projected/4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a-kube-api-access-r5xkm\") pod \"nova-cell0-d7da-account-create-update-lt48b\" (UID: \"4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a\") " pod="nova-kuttl-default/nova-cell0-d7da-account-create-update-lt48b" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.711261 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a-operator-scripts\") pod \"nova-cell0-d7da-account-create-update-lt48b\" (UID: \"4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a\") " pod="nova-kuttl-default/nova-cell0-d7da-account-create-update-lt48b" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.714123 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-b9a3-account-create-update-8sbdz"] Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.719967 4893 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-2gr65" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.735060 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5xkm\" (UniqueName: \"kubernetes.io/projected/4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a-kube-api-access-r5xkm\") pod \"nova-cell0-d7da-account-create-update-lt48b\" (UID: \"4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a\") " pod="nova-kuttl-default/nova-cell0-d7da-account-create-update-lt48b" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.813161 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vrpg\" (UniqueName: \"kubernetes.io/projected/d5122cff-317d-492a-876b-f13a62d6e1db-kube-api-access-9vrpg\") pod \"nova-cell1-b9a3-account-create-update-8sbdz\" (UID: \"d5122cff-317d-492a-876b-f13a62d6e1db\") " pod="nova-kuttl-default/nova-cell1-b9a3-account-create-update-8sbdz" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.813828 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5122cff-317d-492a-876b-f13a62d6e1db-operator-scripts\") pod \"nova-cell1-b9a3-account-create-update-8sbdz\" (UID: \"d5122cff-317d-492a-876b-f13a62d6e1db\") " pod="nova-kuttl-default/nova-cell1-b9a3-account-create-update-8sbdz" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.814819 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5122cff-317d-492a-876b-f13a62d6e1db-operator-scripts\") pod \"nova-cell1-b9a3-account-create-update-8sbdz\" (UID: \"d5122cff-317d-492a-876b-f13a62d6e1db\") " pod="nova-kuttl-default/nova-cell1-b9a3-account-create-update-8sbdz" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.833146 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vrpg\" (UniqueName: \"kubernetes.io/projected/d5122cff-317d-492a-876b-f13a62d6e1db-kube-api-access-9vrpg\") pod \"nova-cell1-b9a3-account-create-update-8sbdz\" (UID: \"d5122cff-317d-492a-876b-f13a62d6e1db\") " pod="nova-kuttl-default/nova-cell1-b9a3-account-create-update-8sbdz" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.839594 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-d7da-account-create-update-lt48b" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.841392 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-lcqjt" Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.903699 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db584031-a14c-4916-a5de-767628445966" path="/var/lib/kubelet/pods/db584031-a14c-4916-a5de-767628445966/volumes" Jan 28 15:29:06 crc kubenswrapper[4893]: W0128 15:29:06.926275 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c3aa8f2_d928_410e_b3b4_57c85bba4490.slice/crio-8629859f6e9469a8b225b42e1418f810473af162ed1c6b1bd44e62cc21570653 WatchSource:0}: Error finding container 8629859f6e9469a8b225b42e1418f810473af162ed1c6b1bd44e62cc21570653: Status 404 returned error can't find the container with id 8629859f6e9469a8b225b42e1418f810473af162ed1c6b1bd44e62cc21570653 Jan 28 15:29:06 crc kubenswrapper[4893]: I0128 15:29:06.926494 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-s7sfs"] Jan 28 15:29:07 crc kubenswrapper[4893]: I0128 15:29:07.039851 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-b9a3-account-create-update-8sbdz" Jan 28 15:29:07 crc kubenswrapper[4893]: I0128 15:29:07.101690 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-9f40-account-create-update-jlpld"] Jan 28 15:29:07 crc kubenswrapper[4893]: W0128 15:29:07.104503 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode57926c0_c91a_4479_9440_de28827aa98f.slice/crio-f31d7a4678efdb70975837fa6b9331b1587b8c9ee7c187e23f6d2f571eb6fbaf WatchSource:0}: Error finding container f31d7a4678efdb70975837fa6b9331b1587b8c9ee7c187e23f6d2f571eb6fbaf: Status 404 returned error can't find the container with id f31d7a4678efdb70975837fa6b9331b1587b8c9ee7c187e23f6d2f571eb6fbaf Jan 28 15:29:07 crc kubenswrapper[4893]: I0128 15:29:07.171827 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-s7sfs" event={"ID":"3c3aa8f2-d928-410e-b3b4-57c85bba4490","Type":"ContainerStarted","Data":"128e4dfa038ed31cae55723b617c9fa7f98d74c221322e41d75c629953c58fbf"} Jan 28 15:29:07 crc kubenswrapper[4893]: I0128 15:29:07.172082 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-s7sfs" event={"ID":"3c3aa8f2-d928-410e-b3b4-57c85bba4490","Type":"ContainerStarted","Data":"8629859f6e9469a8b225b42e1418f810473af162ed1c6b1bd44e62cc21570653"} Jan 28 15:29:07 crc kubenswrapper[4893]: I0128 15:29:07.178140 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-9f40-account-create-update-jlpld" event={"ID":"e57926c0-c91a-4479-9440-de28827aa98f","Type":"ContainerStarted","Data":"f31d7a4678efdb70975837fa6b9331b1587b8c9ee7c187e23f6d2f571eb6fbaf"} Jan 28 15:29:07 crc kubenswrapper[4893]: I0128 15:29:07.191441 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-api-db-create-s7sfs" podStartSLOduration=1.191417927 podStartE2EDuration="1.191417927s" podCreationTimestamp="2026-01-28 15:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:29:07.184028126 +0000 UTC m=+1664.957643164" watchObservedRunningTime="2026-01-28 15:29:07.191417927 +0000 UTC m=+1664.965032955" Jan 28 15:29:07 crc 
kubenswrapper[4893]: I0128 15:29:07.284539 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-2gr65"] Jan 28 15:29:07 crc kubenswrapper[4893]: I0128 15:29:07.392293 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-lcqjt"] Jan 28 15:29:07 crc kubenswrapper[4893]: W0128 15:29:07.393850 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0671435f_14c4_40d2_8af9_173b53e986e6.slice/crio-a9bed3b1d67a0cb7eedebf640c1cc63827044a210eb0c3ef27d96fcef7611da0 WatchSource:0}: Error finding container a9bed3b1d67a0cb7eedebf640c1cc63827044a210eb0c3ef27d96fcef7611da0: Status 404 returned error can't find the container with id a9bed3b1d67a0cb7eedebf640c1cc63827044a210eb0c3ef27d96fcef7611da0 Jan 28 15:29:07 crc kubenswrapper[4893]: I0128 15:29:07.407978 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-d7da-account-create-update-lt48b"] Jan 28 15:29:07 crc kubenswrapper[4893]: I0128 15:29:07.577372 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-b9a3-account-create-update-8sbdz"] Jan 28 15:29:08 crc kubenswrapper[4893]: E0128 15:29:08.061093 4893 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b9449df_d6ab_4ed3_a68c_bfe73a3ba35a.slice/crio-conmon-5eaff64b1e230f888c692be62b1a691a45657242938fc5ef0184ce86ce4d73fa.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b9449df_d6ab_4ed3_a68c_bfe73a3ba35a.slice/crio-5eaff64b1e230f888c692be62b1a691a45657242938fc5ef0184ce86ce4d73fa.scope\": RecentStats: unable to find data in memory cache]" Jan 28 15:29:08 crc kubenswrapper[4893]: I0128 15:29:08.188153 4893 generic.go:334] "Generic (PLEG): container finished" podID="0671435f-14c4-40d2-8af9-173b53e986e6" containerID="d29d8d17833e2e74cdb55e289921d2faff22569fd1ea0bd607c1425baae46f20" exitCode=0 Jan 28 15:29:08 crc kubenswrapper[4893]: I0128 15:29:08.188771 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-lcqjt" event={"ID":"0671435f-14c4-40d2-8af9-173b53e986e6","Type":"ContainerDied","Data":"d29d8d17833e2e74cdb55e289921d2faff22569fd1ea0bd607c1425baae46f20"} Jan 28 15:29:08 crc kubenswrapper[4893]: I0128 15:29:08.188808 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-lcqjt" event={"ID":"0671435f-14c4-40d2-8af9-173b53e986e6","Type":"ContainerStarted","Data":"a9bed3b1d67a0cb7eedebf640c1cc63827044a210eb0c3ef27d96fcef7611da0"} Jan 28 15:29:08 crc kubenswrapper[4893]: I0128 15:29:08.193900 4893 generic.go:334] "Generic (PLEG): container finished" podID="e57926c0-c91a-4479-9440-de28827aa98f" containerID="f6a191782dd3bee45b7a085f71c8cf9c4812c16b6828156f71574dff32d9af0b" exitCode=0 Jan 28 15:29:08 crc kubenswrapper[4893]: I0128 15:29:08.194025 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-9f40-account-create-update-jlpld" event={"ID":"e57926c0-c91a-4479-9440-de28827aa98f","Type":"ContainerDied","Data":"f6a191782dd3bee45b7a085f71c8cf9c4812c16b6828156f71574dff32d9af0b"} Jan 28 15:29:08 crc kubenswrapper[4893]: I0128 15:29:08.195870 4893 generic.go:334] "Generic (PLEG): container finished" podID="3c3aa8f2-d928-410e-b3b4-57c85bba4490" 
containerID="128e4dfa038ed31cae55723b617c9fa7f98d74c221322e41d75c629953c58fbf" exitCode=0 Jan 28 15:29:08 crc kubenswrapper[4893]: I0128 15:29:08.195914 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-s7sfs" event={"ID":"3c3aa8f2-d928-410e-b3b4-57c85bba4490","Type":"ContainerDied","Data":"128e4dfa038ed31cae55723b617c9fa7f98d74c221322e41d75c629953c58fbf"} Jan 28 15:29:08 crc kubenswrapper[4893]: I0128 15:29:08.198054 4893 generic.go:334] "Generic (PLEG): container finished" podID="0ef8b37b-ceed-44d3-9d50-f713684f2b04" containerID="4ffcd14bf457b6e72cb35a5f1d2cced5ecae9770d32e0c76591905938d62c424" exitCode=0 Jan 28 15:29:08 crc kubenswrapper[4893]: I0128 15:29:08.198101 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-2gr65" event={"ID":"0ef8b37b-ceed-44d3-9d50-f713684f2b04","Type":"ContainerDied","Data":"4ffcd14bf457b6e72cb35a5f1d2cced5ecae9770d32e0c76591905938d62c424"} Jan 28 15:29:08 crc kubenswrapper[4893]: I0128 15:29:08.198116 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-2gr65" event={"ID":"0ef8b37b-ceed-44d3-9d50-f713684f2b04","Type":"ContainerStarted","Data":"0090dea7eac375b7684f8ec5e994ffad479338644629393c18863937b613ffb7"} Jan 28 15:29:08 crc kubenswrapper[4893]: I0128 15:29:08.208231 4893 generic.go:334] "Generic (PLEG): container finished" podID="d5122cff-317d-492a-876b-f13a62d6e1db" containerID="cfdfcc8472faa19a7fca4ea06a713dcd29df99efb3791cd8c029b237805ba99b" exitCode=0 Jan 28 15:29:08 crc kubenswrapper[4893]: I0128 15:29:08.208351 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-b9a3-account-create-update-8sbdz" event={"ID":"d5122cff-317d-492a-876b-f13a62d6e1db","Type":"ContainerDied","Data":"cfdfcc8472faa19a7fca4ea06a713dcd29df99efb3791cd8c029b237805ba99b"} Jan 28 15:29:08 crc kubenswrapper[4893]: I0128 15:29:08.208547 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-b9a3-account-create-update-8sbdz" event={"ID":"d5122cff-317d-492a-876b-f13a62d6e1db","Type":"ContainerStarted","Data":"583b5f9d3ffd132ee9be3662111996c2ff73943b883c83c291a5d6daf73ebb87"} Jan 28 15:29:08 crc kubenswrapper[4893]: I0128 15:29:08.210805 4893 generic.go:334] "Generic (PLEG): container finished" podID="4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a" containerID="5eaff64b1e230f888c692be62b1a691a45657242938fc5ef0184ce86ce4d73fa" exitCode=0 Jan 28 15:29:08 crc kubenswrapper[4893]: I0128 15:29:08.210908 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-d7da-account-create-update-lt48b" event={"ID":"4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a","Type":"ContainerDied","Data":"5eaff64b1e230f888c692be62b1a691a45657242938fc5ef0184ce86ce4d73fa"} Jan 28 15:29:08 crc kubenswrapper[4893]: I0128 15:29:08.211056 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-d7da-account-create-update-lt48b" event={"ID":"4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a","Type":"ContainerStarted","Data":"c09126ae7945fadeb2a49466d2b645e49ad806bb57fc78c630298ec686bb918c"} Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.514499 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-9f40-account-create-update-jlpld" Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.575006 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e57926c0-c91a-4479-9440-de28827aa98f-operator-scripts\") pod \"e57926c0-c91a-4479-9440-de28827aa98f\" (UID: \"e57926c0-c91a-4479-9440-de28827aa98f\") " Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.575150 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqw8c\" (UniqueName: \"kubernetes.io/projected/e57926c0-c91a-4479-9440-de28827aa98f-kube-api-access-wqw8c\") pod \"e57926c0-c91a-4479-9440-de28827aa98f\" (UID: \"e57926c0-c91a-4479-9440-de28827aa98f\") " Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.575897 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e57926c0-c91a-4479-9440-de28827aa98f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e57926c0-c91a-4479-9440-de28827aa98f" (UID: "e57926c0-c91a-4479-9440-de28827aa98f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.576591 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e57926c0-c91a-4479-9440-de28827aa98f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.594419 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e57926c0-c91a-4479-9440-de28827aa98f-kube-api-access-wqw8c" (OuterVolumeSpecName: "kube-api-access-wqw8c") pod "e57926c0-c91a-4479-9440-de28827aa98f" (UID: "e57926c0-c91a-4479-9440-de28827aa98f"). InnerVolumeSpecName "kube-api-access-wqw8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.672395 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-2gr65" Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.677937 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqw8c\" (UniqueName: \"kubernetes.io/projected/e57926c0-c91a-4479-9440-de28827aa98f-kube-api-access-wqw8c\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.778743 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42l47\" (UniqueName: \"kubernetes.io/projected/0ef8b37b-ceed-44d3-9d50-f713684f2b04-kube-api-access-42l47\") pod \"0ef8b37b-ceed-44d3-9d50-f713684f2b04\" (UID: \"0ef8b37b-ceed-44d3-9d50-f713684f2b04\") " Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.778873 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef8b37b-ceed-44d3-9d50-f713684f2b04-operator-scripts\") pod \"0ef8b37b-ceed-44d3-9d50-f713684f2b04\" (UID: \"0ef8b37b-ceed-44d3-9d50-f713684f2b04\") " Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.779248 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ef8b37b-ceed-44d3-9d50-f713684f2b04-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ef8b37b-ceed-44d3-9d50-f713684f2b04" (UID: "0ef8b37b-ceed-44d3-9d50-f713684f2b04"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.779773 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ef8b37b-ceed-44d3-9d50-f713684f2b04-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.782121 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ef8b37b-ceed-44d3-9d50-f713684f2b04-kube-api-access-42l47" (OuterVolumeSpecName: "kube-api-access-42l47") pod "0ef8b37b-ceed-44d3-9d50-f713684f2b04" (UID: "0ef8b37b-ceed-44d3-9d50-f713684f2b04"). InnerVolumeSpecName "kube-api-access-42l47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.797192 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-b9a3-account-create-update-8sbdz" Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.804128 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-lcqjt" Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.814896 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-d7da-account-create-update-lt48b" Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.820040 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-s7sfs" Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.881116 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5122cff-317d-492a-876b-f13a62d6e1db-operator-scripts\") pod \"d5122cff-317d-492a-876b-f13a62d6e1db\" (UID: \"d5122cff-317d-492a-876b-f13a62d6e1db\") " Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.881199 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntd6b\" (UniqueName: \"kubernetes.io/projected/3c3aa8f2-d928-410e-b3b4-57c85bba4490-kube-api-access-ntd6b\") pod \"3c3aa8f2-d928-410e-b3b4-57c85bba4490\" (UID: \"3c3aa8f2-d928-410e-b3b4-57c85bba4490\") " Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.881238 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vrpg\" (UniqueName: \"kubernetes.io/projected/d5122cff-317d-492a-876b-f13a62d6e1db-kube-api-access-9vrpg\") pod \"d5122cff-317d-492a-876b-f13a62d6e1db\" (UID: \"d5122cff-317d-492a-876b-f13a62d6e1db\") " Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.881326 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c3aa8f2-d928-410e-b3b4-57c85bba4490-operator-scripts\") pod \"3c3aa8f2-d928-410e-b3b4-57c85bba4490\" (UID: \"3c3aa8f2-d928-410e-b3b4-57c85bba4490\") " Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.881385 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hk6x\" (UniqueName: \"kubernetes.io/projected/0671435f-14c4-40d2-8af9-173b53e986e6-kube-api-access-7hk6x\") pod \"0671435f-14c4-40d2-8af9-173b53e986e6\" (UID: \"0671435f-14c4-40d2-8af9-173b53e986e6\") " Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.881415 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0671435f-14c4-40d2-8af9-173b53e986e6-operator-scripts\") pod \"0671435f-14c4-40d2-8af9-173b53e986e6\" (UID: \"0671435f-14c4-40d2-8af9-173b53e986e6\") " Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.881583 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5xkm\" (UniqueName: \"kubernetes.io/projected/4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a-kube-api-access-r5xkm\") pod \"4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a\" (UID: \"4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a\") " Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.881687 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a-operator-scripts\") pod \"4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a\" (UID: \"4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a\") " Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.881779 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c3aa8f2-d928-410e-b3b4-57c85bba4490-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3c3aa8f2-d928-410e-b3b4-57c85bba4490" (UID: "3c3aa8f2-d928-410e-b3b4-57c85bba4490"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.881876 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5122cff-317d-492a-876b-f13a62d6e1db-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d5122cff-317d-492a-876b-f13a62d6e1db" (UID: "d5122cff-317d-492a-876b-f13a62d6e1db"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.882129 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0671435f-14c4-40d2-8af9-173b53e986e6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0671435f-14c4-40d2-8af9-173b53e986e6" (UID: "0671435f-14c4-40d2-8af9-173b53e986e6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.882362 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42l47\" (UniqueName: \"kubernetes.io/projected/0ef8b37b-ceed-44d3-9d50-f713684f2b04-kube-api-access-42l47\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.882393 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5122cff-317d-492a-876b-f13a62d6e1db-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.882410 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c3aa8f2-d928-410e-b3b4-57c85bba4490-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.882422 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0671435f-14c4-40d2-8af9-173b53e986e6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.883325 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a" (UID: "4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.884598 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5122cff-317d-492a-876b-f13a62d6e1db-kube-api-access-9vrpg" (OuterVolumeSpecName: "kube-api-access-9vrpg") pod "d5122cff-317d-492a-876b-f13a62d6e1db" (UID: "d5122cff-317d-492a-876b-f13a62d6e1db"). InnerVolumeSpecName "kube-api-access-9vrpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.884947 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c3aa8f2-d928-410e-b3b4-57c85bba4490-kube-api-access-ntd6b" (OuterVolumeSpecName: "kube-api-access-ntd6b") pod "3c3aa8f2-d928-410e-b3b4-57c85bba4490" (UID: "3c3aa8f2-d928-410e-b3b4-57c85bba4490"). InnerVolumeSpecName "kube-api-access-ntd6b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.885036 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a-kube-api-access-r5xkm" (OuterVolumeSpecName: "kube-api-access-r5xkm") pod "4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a" (UID: "4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a"). InnerVolumeSpecName "kube-api-access-r5xkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.885562 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0671435f-14c4-40d2-8af9-173b53e986e6-kube-api-access-7hk6x" (OuterVolumeSpecName: "kube-api-access-7hk6x") pod "0671435f-14c4-40d2-8af9-173b53e986e6" (UID: "0671435f-14c4-40d2-8af9-173b53e986e6"). InnerVolumeSpecName "kube-api-access-7hk6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.983920 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5xkm\" (UniqueName: \"kubernetes.io/projected/4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a-kube-api-access-r5xkm\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.983956 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.983966 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntd6b\" (UniqueName: \"kubernetes.io/projected/3c3aa8f2-d928-410e-b3b4-57c85bba4490-kube-api-access-ntd6b\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.983976 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vrpg\" (UniqueName: \"kubernetes.io/projected/d5122cff-317d-492a-876b-f13a62d6e1db-kube-api-access-9vrpg\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:09 crc kubenswrapper[4893]: I0128 15:29:09.983989 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hk6x\" (UniqueName: \"kubernetes.io/projected/0671435f-14c4-40d2-8af9-173b53e986e6-kube-api-access-7hk6x\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:10 crc kubenswrapper[4893]: I0128 15:29:10.228090 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-9f40-account-create-update-jlpld" event={"ID":"e57926c0-c91a-4479-9440-de28827aa98f","Type":"ContainerDied","Data":"f31d7a4678efdb70975837fa6b9331b1587b8c9ee7c187e23f6d2f571eb6fbaf"} Jan 28 15:29:10 crc kubenswrapper[4893]: I0128 15:29:10.228139 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f31d7a4678efdb70975837fa6b9331b1587b8c9ee7c187e23f6d2f571eb6fbaf" Jan 28 15:29:10 crc kubenswrapper[4893]: I0128 15:29:10.228195 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-9f40-account-create-update-jlpld" Jan 28 15:29:10 crc kubenswrapper[4893]: I0128 15:29:10.231979 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-s7sfs" event={"ID":"3c3aa8f2-d928-410e-b3b4-57c85bba4490","Type":"ContainerDied","Data":"8629859f6e9469a8b225b42e1418f810473af162ed1c6b1bd44e62cc21570653"} Jan 28 15:29:10 crc kubenswrapper[4893]: I0128 15:29:10.232276 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8629859f6e9469a8b225b42e1418f810473af162ed1c6b1bd44e62cc21570653" Jan 28 15:29:10 crc kubenswrapper[4893]: I0128 15:29:10.232023 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-s7sfs" Jan 28 15:29:10 crc kubenswrapper[4893]: I0128 15:29:10.233661 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-2gr65" event={"ID":"0ef8b37b-ceed-44d3-9d50-f713684f2b04","Type":"ContainerDied","Data":"0090dea7eac375b7684f8ec5e994ffad479338644629393c18863937b613ffb7"} Jan 28 15:29:10 crc kubenswrapper[4893]: I0128 15:29:10.233691 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0090dea7eac375b7684f8ec5e994ffad479338644629393c18863937b613ffb7" Jan 28 15:29:10 crc kubenswrapper[4893]: I0128 15:29:10.233759 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-2gr65" Jan 28 15:29:10 crc kubenswrapper[4893]: I0128 15:29:10.235437 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-b9a3-account-create-update-8sbdz" event={"ID":"d5122cff-317d-492a-876b-f13a62d6e1db","Type":"ContainerDied","Data":"583b5f9d3ffd132ee9be3662111996c2ff73943b883c83c291a5d6daf73ebb87"} Jan 28 15:29:10 crc kubenswrapper[4893]: I0128 15:29:10.235513 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="583b5f9d3ffd132ee9be3662111996c2ff73943b883c83c291a5d6daf73ebb87" Jan 28 15:29:10 crc kubenswrapper[4893]: I0128 15:29:10.235583 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-b9a3-account-create-update-8sbdz" Jan 28 15:29:10 crc kubenswrapper[4893]: I0128 15:29:10.239715 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-d7da-account-create-update-lt48b" Jan 28 15:29:10 crc kubenswrapper[4893]: I0128 15:29:10.239713 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-d7da-account-create-update-lt48b" event={"ID":"4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a","Type":"ContainerDied","Data":"c09126ae7945fadeb2a49466d2b645e49ad806bb57fc78c630298ec686bb918c"} Jan 28 15:29:10 crc kubenswrapper[4893]: I0128 15:29:10.239888 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c09126ae7945fadeb2a49466d2b645e49ad806bb57fc78c630298ec686bb918c" Jan 28 15:29:10 crc kubenswrapper[4893]: I0128 15:29:10.249445 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-lcqjt" event={"ID":"0671435f-14c4-40d2-8af9-173b53e986e6","Type":"ContainerDied","Data":"a9bed3b1d67a0cb7eedebf640c1cc63827044a210eb0c3ef27d96fcef7611da0"} Jan 28 15:29:10 crc kubenswrapper[4893]: I0128 15:29:10.249511 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9bed3b1d67a0cb7eedebf640c1cc63827044a210eb0c3ef27d96fcef7611da0" Jan 28 15:29:10 crc kubenswrapper[4893]: I0128 15:29:10.249593 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-lcqjt" Jan 28 15:29:11 crc kubenswrapper[4893]: I0128 15:29:11.813250 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-scjhk"] Jan 28 15:29:11 crc kubenswrapper[4893]: E0128 15:29:11.813701 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a" containerName="mariadb-account-create-update" Jan 28 15:29:11 crc kubenswrapper[4893]: I0128 15:29:11.813721 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a" containerName="mariadb-account-create-update" Jan 28 15:29:11 crc kubenswrapper[4893]: E0128 15:29:11.813747 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5122cff-317d-492a-876b-f13a62d6e1db" containerName="mariadb-account-create-update" Jan 28 15:29:11 crc kubenswrapper[4893]: I0128 15:29:11.813755 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5122cff-317d-492a-876b-f13a62d6e1db" containerName="mariadb-account-create-update" Jan 28 15:29:11 crc kubenswrapper[4893]: E0128 15:29:11.813771 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e57926c0-c91a-4479-9440-de28827aa98f" containerName="mariadb-account-create-update" Jan 28 15:29:11 crc kubenswrapper[4893]: I0128 15:29:11.813780 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="e57926c0-c91a-4479-9440-de28827aa98f" containerName="mariadb-account-create-update" Jan 28 15:29:11 crc kubenswrapper[4893]: E0128 15:29:11.813799 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0671435f-14c4-40d2-8af9-173b53e986e6" containerName="mariadb-database-create" Jan 28 15:29:11 crc kubenswrapper[4893]: I0128 15:29:11.813806 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="0671435f-14c4-40d2-8af9-173b53e986e6" containerName="mariadb-database-create" Jan 28 15:29:11 crc kubenswrapper[4893]: E0128 15:29:11.813819 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef8b37b-ceed-44d3-9d50-f713684f2b04" containerName="mariadb-database-create" Jan 28 15:29:11 crc kubenswrapper[4893]: I0128 15:29:11.813826 4893 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="0ef8b37b-ceed-44d3-9d50-f713684f2b04" containerName="mariadb-database-create" Jan 28 15:29:11 crc kubenswrapper[4893]: E0128 15:29:11.813839 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c3aa8f2-d928-410e-b3b4-57c85bba4490" containerName="mariadb-database-create" Jan 28 15:29:11 crc kubenswrapper[4893]: I0128 15:29:11.813847 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c3aa8f2-d928-410e-b3b4-57c85bba4490" containerName="mariadb-database-create" Jan 28 15:29:11 crc kubenswrapper[4893]: I0128 15:29:11.814040 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c3aa8f2-d928-410e-b3b4-57c85bba4490" containerName="mariadb-database-create" Jan 28 15:29:11 crc kubenswrapper[4893]: I0128 15:29:11.814055 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ef8b37b-ceed-44d3-9d50-f713684f2b04" containerName="mariadb-database-create" Jan 28 15:29:11 crc kubenswrapper[4893]: I0128 15:29:11.814065 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a" containerName="mariadb-account-create-update" Jan 28 15:29:11 crc kubenswrapper[4893]: I0128 15:29:11.814079 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="0671435f-14c4-40d2-8af9-173b53e986e6" containerName="mariadb-database-create" Jan 28 15:29:11 crc kubenswrapper[4893]: I0128 15:29:11.814093 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5122cff-317d-492a-876b-f13a62d6e1db" containerName="mariadb-account-create-update" Jan 28 15:29:11 crc kubenswrapper[4893]: I0128 15:29:11.814106 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="e57926c0-c91a-4479-9440-de28827aa98f" containerName="mariadb-account-create-update" Jan 28 15:29:11 crc kubenswrapper[4893]: I0128 15:29:11.814844 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-scjhk" Jan 28 15:29:11 crc kubenswrapper[4893]: I0128 15:29:11.816873 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 28 15:29:11 crc kubenswrapper[4893]: I0128 15:29:11.816906 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-scripts" Jan 28 15:29:11 crc kubenswrapper[4893]: I0128 15:29:11.817161 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-tb6kd" Jan 28 15:29:11 crc kubenswrapper[4893]: I0128 15:29:11.826993 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-scjhk"] Jan 28 15:29:11 crc kubenswrapper[4893]: I0128 15:29:11.914030 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7stg\" (UniqueName: \"kubernetes.io/projected/dfc40127-55a9-4d65-9271-5b4b5d48473d-kube-api-access-w7stg\") pod \"nova-kuttl-cell0-conductor-db-sync-scjhk\" (UID: \"dfc40127-55a9-4d65-9271-5b4b5d48473d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-scjhk" Jan 28 15:29:11 crc kubenswrapper[4893]: I0128 15:29:11.915582 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfc40127-55a9-4d65-9271-5b4b5d48473d-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-scjhk\" (UID: \"dfc40127-55a9-4d65-9271-5b4b5d48473d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-scjhk" Jan 28 15:29:11 crc kubenswrapper[4893]: I0128 15:29:11.915672 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfc40127-55a9-4d65-9271-5b4b5d48473d-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-scjhk\" (UID: \"dfc40127-55a9-4d65-9271-5b4b5d48473d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-scjhk" Jan 28 15:29:12 crc kubenswrapper[4893]: I0128 15:29:12.017877 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7stg\" (UniqueName: \"kubernetes.io/projected/dfc40127-55a9-4d65-9271-5b4b5d48473d-kube-api-access-w7stg\") pod \"nova-kuttl-cell0-conductor-db-sync-scjhk\" (UID: \"dfc40127-55a9-4d65-9271-5b4b5d48473d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-scjhk" Jan 28 15:29:12 crc kubenswrapper[4893]: I0128 15:29:12.018087 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfc40127-55a9-4d65-9271-5b4b5d48473d-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-scjhk\" (UID: \"dfc40127-55a9-4d65-9271-5b4b5d48473d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-scjhk" Jan 28 15:29:12 crc kubenswrapper[4893]: I0128 15:29:12.018133 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfc40127-55a9-4d65-9271-5b4b5d48473d-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-scjhk\" (UID: \"dfc40127-55a9-4d65-9271-5b4b5d48473d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-scjhk" Jan 28 15:29:12 crc kubenswrapper[4893]: I0128 15:29:12.025123 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/dfc40127-55a9-4d65-9271-5b4b5d48473d-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-scjhk\" (UID: \"dfc40127-55a9-4d65-9271-5b4b5d48473d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-scjhk" Jan 28 15:29:12 crc kubenswrapper[4893]: I0128 15:29:12.026467 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfc40127-55a9-4d65-9271-5b4b5d48473d-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-scjhk\" (UID: \"dfc40127-55a9-4d65-9271-5b4b5d48473d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-scjhk" Jan 28 15:29:12 crc kubenswrapper[4893]: I0128 15:29:12.036048 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7stg\" (UniqueName: \"kubernetes.io/projected/dfc40127-55a9-4d65-9271-5b4b5d48473d-kube-api-access-w7stg\") pod \"nova-kuttl-cell0-conductor-db-sync-scjhk\" (UID: \"dfc40127-55a9-4d65-9271-5b4b5d48473d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-scjhk" Jan 28 15:29:12 crc kubenswrapper[4893]: I0128 15:29:12.179812 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-scjhk" Jan 28 15:29:12 crc kubenswrapper[4893]: I0128 15:29:12.615759 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-scjhk"] Jan 28 15:29:12 crc kubenswrapper[4893]: W0128 15:29:12.625331 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfc40127_55a9_4d65_9271_5b4b5d48473d.slice/crio-8dc0947087821baa9ae58c967e54afc95750a8d6d19b649cb4a1131752ed32c2 WatchSource:0}: Error finding container 8dc0947087821baa9ae58c967e54afc95750a8d6d19b649cb4a1131752ed32c2: Status 404 returned error can't find the container with id 8dc0947087821baa9ae58c967e54afc95750a8d6d19b649cb4a1131752ed32c2 Jan 28 15:29:13 crc kubenswrapper[4893]: I0128 15:29:13.270579 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-scjhk" event={"ID":"dfc40127-55a9-4d65-9271-5b4b5d48473d","Type":"ContainerStarted","Data":"d9c418afbeb3b342d8024d1e60d149ef61e9073d0920bf25b1e00f8f7a86528b"} Jan 28 15:29:13 crc kubenswrapper[4893]: I0128 15:29:13.270810 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-scjhk" event={"ID":"dfc40127-55a9-4d65-9271-5b4b5d48473d","Type":"ContainerStarted","Data":"8dc0947087821baa9ae58c967e54afc95750a8d6d19b649cb4a1131752ed32c2"} Jan 28 15:29:13 crc kubenswrapper[4893]: I0128 15:29:13.290211 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-scjhk" podStartSLOduration=2.290184132 podStartE2EDuration="2.290184132s" podCreationTimestamp="2026-01-28 15:29:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:29:13.285518175 +0000 UTC m=+1671.059133213" watchObservedRunningTime="2026-01-28 15:29:13.290184132 +0000 UTC m=+1671.063799160" Jan 28 15:29:18 crc kubenswrapper[4893]: I0128 15:29:18.325255 4893 generic.go:334] "Generic (PLEG): container finished" podID="dfc40127-55a9-4d65-9271-5b4b5d48473d" containerID="d9c418afbeb3b342d8024d1e60d149ef61e9073d0920bf25b1e00f8f7a86528b" exitCode=0 Jan 28 15:29:18 crc 
kubenswrapper[4893]: I0128 15:29:18.325328 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-scjhk" event={"ID":"dfc40127-55a9-4d65-9271-5b4b5d48473d","Type":"ContainerDied","Data":"d9c418afbeb3b342d8024d1e60d149ef61e9073d0920bf25b1e00f8f7a86528b"} Jan 28 15:29:18 crc kubenswrapper[4893]: I0128 15:29:18.892209 4893 scope.go:117] "RemoveContainer" containerID="79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51" Jan 28 15:29:18 crc kubenswrapper[4893]: E0128 15:29:18.892732 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:29:19 crc kubenswrapper[4893]: I0128 15:29:19.661416 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-scjhk" Jan 28 15:29:19 crc kubenswrapper[4893]: I0128 15:29:19.748643 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7stg\" (UniqueName: \"kubernetes.io/projected/dfc40127-55a9-4d65-9271-5b4b5d48473d-kube-api-access-w7stg\") pod \"dfc40127-55a9-4d65-9271-5b4b5d48473d\" (UID: \"dfc40127-55a9-4d65-9271-5b4b5d48473d\") " Jan 28 15:29:19 crc kubenswrapper[4893]: I0128 15:29:19.748753 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfc40127-55a9-4d65-9271-5b4b5d48473d-scripts\") pod \"dfc40127-55a9-4d65-9271-5b4b5d48473d\" (UID: \"dfc40127-55a9-4d65-9271-5b4b5d48473d\") " Jan 28 15:29:19 crc kubenswrapper[4893]: I0128 15:29:19.748845 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfc40127-55a9-4d65-9271-5b4b5d48473d-config-data\") pod \"dfc40127-55a9-4d65-9271-5b4b5d48473d\" (UID: \"dfc40127-55a9-4d65-9271-5b4b5d48473d\") " Jan 28 15:29:19 crc kubenswrapper[4893]: I0128 15:29:19.755602 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfc40127-55a9-4d65-9271-5b4b5d48473d-kube-api-access-w7stg" (OuterVolumeSpecName: "kube-api-access-w7stg") pod "dfc40127-55a9-4d65-9271-5b4b5d48473d" (UID: "dfc40127-55a9-4d65-9271-5b4b5d48473d"). InnerVolumeSpecName "kube-api-access-w7stg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:29:19 crc kubenswrapper[4893]: I0128 15:29:19.757983 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc40127-55a9-4d65-9271-5b4b5d48473d-scripts" (OuterVolumeSpecName: "scripts") pod "dfc40127-55a9-4d65-9271-5b4b5d48473d" (UID: "dfc40127-55a9-4d65-9271-5b4b5d48473d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:29:19 crc kubenswrapper[4893]: I0128 15:29:19.776879 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc40127-55a9-4d65-9271-5b4b5d48473d-config-data" (OuterVolumeSpecName: "config-data") pod "dfc40127-55a9-4d65-9271-5b4b5d48473d" (UID: "dfc40127-55a9-4d65-9271-5b4b5d48473d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:29:19 crc kubenswrapper[4893]: I0128 15:29:19.851126 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7stg\" (UniqueName: \"kubernetes.io/projected/dfc40127-55a9-4d65-9271-5b4b5d48473d-kube-api-access-w7stg\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:19 crc kubenswrapper[4893]: I0128 15:29:19.851177 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfc40127-55a9-4d65-9271-5b4b5d48473d-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:19 crc kubenswrapper[4893]: I0128 15:29:19.851190 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfc40127-55a9-4d65-9271-5b4b5d48473d-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:20 crc kubenswrapper[4893]: I0128 15:29:20.346655 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-scjhk" event={"ID":"dfc40127-55a9-4d65-9271-5b4b5d48473d","Type":"ContainerDied","Data":"8dc0947087821baa9ae58c967e54afc95750a8d6d19b649cb4a1131752ed32c2"} Jan 28 15:29:20 crc kubenswrapper[4893]: I0128 15:29:20.346703 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8dc0947087821baa9ae58c967e54afc95750a8d6d19b649cb4a1131752ed32c2" Jan 28 15:29:20 crc kubenswrapper[4893]: I0128 15:29:20.346858 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-scjhk" Jan 28 15:29:20 crc kubenswrapper[4893]: I0128 15:29:20.424964 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 15:29:20 crc kubenswrapper[4893]: E0128 15:29:20.425345 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfc40127-55a9-4d65-9271-5b4b5d48473d" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 28 15:29:20 crc kubenswrapper[4893]: I0128 15:29:20.425362 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfc40127-55a9-4d65-9271-5b4b5d48473d" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 28 15:29:20 crc kubenswrapper[4893]: I0128 15:29:20.425547 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfc40127-55a9-4d65-9271-5b4b5d48473d" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 28 15:29:20 crc kubenswrapper[4893]: I0128 15:29:20.426069 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:29:20 crc kubenswrapper[4893]: I0128 15:29:20.429487 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-tb6kd" Jan 28 15:29:20 crc kubenswrapper[4893]: I0128 15:29:20.429634 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 28 15:29:20 crc kubenswrapper[4893]: I0128 15:29:20.440237 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 15:29:20 crc kubenswrapper[4893]: I0128 15:29:20.563317 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87e0172a-6fe8-43d0-97ba-3ea57089d58d-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"87e0172a-6fe8-43d0-97ba-3ea57089d58d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:29:20 crc kubenswrapper[4893]: I0128 15:29:20.563403 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrgx5\" (UniqueName: \"kubernetes.io/projected/87e0172a-6fe8-43d0-97ba-3ea57089d58d-kube-api-access-jrgx5\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"87e0172a-6fe8-43d0-97ba-3ea57089d58d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:29:20 crc kubenswrapper[4893]: I0128 15:29:20.665306 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrgx5\" (UniqueName: \"kubernetes.io/projected/87e0172a-6fe8-43d0-97ba-3ea57089d58d-kube-api-access-jrgx5\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"87e0172a-6fe8-43d0-97ba-3ea57089d58d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:29:20 crc kubenswrapper[4893]: I0128 15:29:20.665526 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87e0172a-6fe8-43d0-97ba-3ea57089d58d-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"87e0172a-6fe8-43d0-97ba-3ea57089d58d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:29:20 crc kubenswrapper[4893]: I0128 15:29:20.670697 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87e0172a-6fe8-43d0-97ba-3ea57089d58d-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"87e0172a-6fe8-43d0-97ba-3ea57089d58d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:29:20 crc kubenswrapper[4893]: I0128 15:29:20.692372 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrgx5\" (UniqueName: \"kubernetes.io/projected/87e0172a-6fe8-43d0-97ba-3ea57089d58d-kube-api-access-jrgx5\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"87e0172a-6fe8-43d0-97ba-3ea57089d58d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:29:20 crc kubenswrapper[4893]: I0128 15:29:20.741872 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:29:21 crc kubenswrapper[4893]: I0128 15:29:21.218547 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 15:29:21 crc kubenswrapper[4893]: I0128 15:29:21.355890 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"87e0172a-6fe8-43d0-97ba-3ea57089d58d","Type":"ContainerStarted","Data":"679fc33aafa7bdebce9f72bff8d69d22b9916ff9d9122d78952232d7ca374755"} Jan 28 15:29:22 crc kubenswrapper[4893]: I0128 15:29:22.369939 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"87e0172a-6fe8-43d0-97ba-3ea57089d58d","Type":"ContainerStarted","Data":"ba76a74c5e15d214d9ff5666bdead74b3e974634b822003b81259ff501b75995"} Jan 28 15:29:22 crc kubenswrapper[4893]: I0128 15:29:22.370178 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:29:22 crc kubenswrapper[4893]: I0128 15:29:22.395636 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podStartSLOduration=2.395611703 podStartE2EDuration="2.395611703s" podCreationTimestamp="2026-01-28 15:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:29:22.393968469 +0000 UTC m=+1680.167583497" watchObservedRunningTime="2026-01-28 15:29:22.395611703 +0000 UTC m=+1680.169226731" Jan 28 15:29:30 crc kubenswrapper[4893]: I0128 15:29:30.767526 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:29:30 crc kubenswrapper[4893]: I0128 15:29:30.892150 4893 scope.go:117] "RemoveContainer" containerID="79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51" Jan 28 15:29:30 crc kubenswrapper[4893]: E0128 15:29:30.892717 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.194209 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-wnfq9"] Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.195820 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-wnfq9" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.198266 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.199496 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.207765 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-wnfq9"] Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.344593 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7c67810-713f-40ef-a19c-f7a726b17271-config-data\") pod \"nova-kuttl-cell0-cell-mapping-wnfq9\" (UID: \"b7c67810-713f-40ef-a19c-f7a726b17271\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-wnfq9" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.344956 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pg2z\" (UniqueName: \"kubernetes.io/projected/b7c67810-713f-40ef-a19c-f7a726b17271-kube-api-access-6pg2z\") pod \"nova-kuttl-cell0-cell-mapping-wnfq9\" (UID: \"b7c67810-713f-40ef-a19c-f7a726b17271\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-wnfq9" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.345100 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7c67810-713f-40ef-a19c-f7a726b17271-scripts\") pod \"nova-kuttl-cell0-cell-mapping-wnfq9\" (UID: \"b7c67810-713f-40ef-a19c-f7a726b17271\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-wnfq9" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.446797 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7c67810-713f-40ef-a19c-f7a726b17271-config-data\") pod \"nova-kuttl-cell0-cell-mapping-wnfq9\" (UID: \"b7c67810-713f-40ef-a19c-f7a726b17271\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-wnfq9" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.446999 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pg2z\" (UniqueName: \"kubernetes.io/projected/b7c67810-713f-40ef-a19c-f7a726b17271-kube-api-access-6pg2z\") pod \"nova-kuttl-cell0-cell-mapping-wnfq9\" (UID: \"b7c67810-713f-40ef-a19c-f7a726b17271\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-wnfq9" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.447055 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7c67810-713f-40ef-a19c-f7a726b17271-scripts\") pod \"nova-kuttl-cell0-cell-mapping-wnfq9\" (UID: \"b7c67810-713f-40ef-a19c-f7a726b17271\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-wnfq9" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.455846 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7c67810-713f-40ef-a19c-f7a726b17271-scripts\") pod \"nova-kuttl-cell0-cell-mapping-wnfq9\" (UID: \"b7c67810-713f-40ef-a19c-f7a726b17271\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-wnfq9" Jan 28 15:29:31 crc 
kubenswrapper[4893]: I0128 15:29:31.467723 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7c67810-713f-40ef-a19c-f7a726b17271-config-data\") pod \"nova-kuttl-cell0-cell-mapping-wnfq9\" (UID: \"b7c67810-713f-40ef-a19c-f7a726b17271\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-wnfq9" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.490600 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pg2z\" (UniqueName: \"kubernetes.io/projected/b7c67810-713f-40ef-a19c-f7a726b17271-kube-api-access-6pg2z\") pod \"nova-kuttl-cell0-cell-mapping-wnfq9\" (UID: \"b7c67810-713f-40ef-a19c-f7a726b17271\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-wnfq9" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.514321 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.514601 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-wnfq9" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.518501 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.522301 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.552987 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74ce4d2d-762a-40b6-99fd-b56ae540b1c8-config-data\") pod \"nova-kuttl-api-0\" (UID: \"74ce4d2d-762a-40b6-99fd-b56ae540b1c8\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.553283 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptnd5\" (UniqueName: \"kubernetes.io/projected/74ce4d2d-762a-40b6-99fd-b56ae540b1c8-kube-api-access-ptnd5\") pod \"nova-kuttl-api-0\" (UID: \"74ce4d2d-762a-40b6-99fd-b56ae540b1c8\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.553338 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74ce4d2d-762a-40b6-99fd-b56ae540b1c8-logs\") pod \"nova-kuttl-api-0\" (UID: \"74ce4d2d-762a-40b6-99fd-b56ae540b1c8\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.558032 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.596658 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.597813 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.605357 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.606612 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.607823 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-novncproxy-config-data" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.611011 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.655400 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74ce4d2d-762a-40b6-99fd-b56ae540b1c8-config-data\") pod \"nova-kuttl-api-0\" (UID: \"74ce4d2d-762a-40b6-99fd-b56ae540b1c8\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.655492 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptnd5\" (UniqueName: \"kubernetes.io/projected/74ce4d2d-762a-40b6-99fd-b56ae540b1c8-kube-api-access-ptnd5\") pod \"nova-kuttl-api-0\" (UID: \"74ce4d2d-762a-40b6-99fd-b56ae540b1c8\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.655554 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74ce4d2d-762a-40b6-99fd-b56ae540b1c8-logs\") pod \"nova-kuttl-api-0\" (UID: \"74ce4d2d-762a-40b6-99fd-b56ae540b1c8\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.656119 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74ce4d2d-762a-40b6-99fd-b56ae540b1c8-logs\") pod \"nova-kuttl-api-0\" (UID: \"74ce4d2d-762a-40b6-99fd-b56ae540b1c8\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.661199 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74ce4d2d-762a-40b6-99fd-b56ae540b1c8-config-data\") pod \"nova-kuttl-api-0\" (UID: \"74ce4d2d-762a-40b6-99fd-b56ae540b1c8\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.677731 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.695726 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptnd5\" (UniqueName: \"kubernetes.io/projected/74ce4d2d-762a-40b6-99fd-b56ae540b1c8-kube-api-access-ptnd5\") pod \"nova-kuttl-api-0\" (UID: \"74ce4d2d-762a-40b6-99fd-b56ae540b1c8\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.710509 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.743699 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.747059 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.750207 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.757360 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87tpz\" (UniqueName: \"kubernetes.io/projected/80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d-kube-api-access-87tpz\") pod \"nova-kuttl-scheduler-0\" (UID: \"80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.757506 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drksj\" (UniqueName: \"kubernetes.io/projected/3a4152d8-cd1c-478b-977c-3542b4ccf601-kube-api-access-drksj\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"3a4152d8-cd1c-478b-977c-3542b4ccf601\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.757590 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a4152d8-cd1c-478b-977c-3542b4ccf601-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"3a4152d8-cd1c-478b-977c-3542b4ccf601\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.757636 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.769532 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.858748 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a4152d8-cd1c-478b-977c-3542b4ccf601-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"3a4152d8-cd1c-478b-977c-3542b4ccf601\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.859021 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.859046 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3d06e6b-7f1a-4c58-be65-efff8dfe3d15-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"a3d06e6b-7f1a-4c58-be65-efff8dfe3d15\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.859087 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87tpz\" (UniqueName: \"kubernetes.io/projected/80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d-kube-api-access-87tpz\") pod \"nova-kuttl-scheduler-0\" (UID: 
\"80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.859115 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d06e6b-7f1a-4c58-be65-efff8dfe3d15-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"a3d06e6b-7f1a-4c58-be65-efff8dfe3d15\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.859148 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfhrm\" (UniqueName: \"kubernetes.io/projected/a3d06e6b-7f1a-4c58-be65-efff8dfe3d15-kube-api-access-qfhrm\") pod \"nova-kuttl-metadata-0\" (UID: \"a3d06e6b-7f1a-4c58-be65-efff8dfe3d15\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.859179 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drksj\" (UniqueName: \"kubernetes.io/projected/3a4152d8-cd1c-478b-977c-3542b4ccf601-kube-api-access-drksj\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"3a4152d8-cd1c-478b-977c-3542b4ccf601\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.869435 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a4152d8-cd1c-478b-977c-3542b4ccf601-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"3a4152d8-cd1c-478b-977c-3542b4ccf601\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.871962 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.876350 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drksj\" (UniqueName: \"kubernetes.io/projected/3a4152d8-cd1c-478b-977c-3542b4ccf601-kube-api-access-drksj\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"3a4152d8-cd1c-478b-977c-3542b4ccf601\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.879045 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87tpz\" (UniqueName: \"kubernetes.io/projected/80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d-kube-api-access-87tpz\") pod \"nova-kuttl-scheduler-0\" (UID: \"80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.939657 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.962870 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3d06e6b-7f1a-4c58-be65-efff8dfe3d15-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"a3d06e6b-7f1a-4c58-be65-efff8dfe3d15\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.962946 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d06e6b-7f1a-4c58-be65-efff8dfe3d15-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"a3d06e6b-7f1a-4c58-be65-efff8dfe3d15\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.962984 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfhrm\" (UniqueName: \"kubernetes.io/projected/a3d06e6b-7f1a-4c58-be65-efff8dfe3d15-kube-api-access-qfhrm\") pod \"nova-kuttl-metadata-0\" (UID: \"a3d06e6b-7f1a-4c58-be65-efff8dfe3d15\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.964145 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3d06e6b-7f1a-4c58-be65-efff8dfe3d15-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"a3d06e6b-7f1a-4c58-be65-efff8dfe3d15\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.967376 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d06e6b-7f1a-4c58-be65-efff8dfe3d15-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"a3d06e6b-7f1a-4c58-be65-efff8dfe3d15\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:31 crc kubenswrapper[4893]: I0128 15:29:31.981945 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfhrm\" (UniqueName: \"kubernetes.io/projected/a3d06e6b-7f1a-4c58-be65-efff8dfe3d15-kube-api-access-qfhrm\") pod \"nova-kuttl-metadata-0\" (UID: \"a3d06e6b-7f1a-4c58-be65-efff8dfe3d15\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:32 crc kubenswrapper[4893]: I0128 15:29:32.041847 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:29:32 crc kubenswrapper[4893]: I0128 15:29:32.057765 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:29:32 crc kubenswrapper[4893]: I0128 15:29:32.077468 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:32 crc kubenswrapper[4893]: I0128 15:29:32.152802 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-wnfq9"] Jan 28 15:29:32 crc kubenswrapper[4893]: I0128 15:29:32.310029 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-xclqf"] Jan 28 15:29:32 crc kubenswrapper[4893]: I0128 15:29:32.312880 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-xclqf" Jan 28 15:29:32 crc kubenswrapper[4893]: I0128 15:29:32.315955 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 28 15:29:32 crc kubenswrapper[4893]: I0128 15:29:32.317231 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-scripts" Jan 28 15:29:32 crc kubenswrapper[4893]: I0128 15:29:32.328309 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-xclqf"] Jan 28 15:29:32 crc kubenswrapper[4893]: I0128 15:29:32.403631 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:29:32 crc kubenswrapper[4893]: I0128 15:29:32.471907 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jnmc\" (UniqueName: \"kubernetes.io/projected/ed8525f9-d3bf-452b-bd30-a60e65e32d7d-kube-api-access-9jnmc\") pod \"nova-kuttl-cell1-conductor-db-sync-xclqf\" (UID: \"ed8525f9-d3bf-452b-bd30-a60e65e32d7d\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-xclqf" Jan 28 15:29:32 crc kubenswrapper[4893]: I0128 15:29:32.471951 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed8525f9-d3bf-452b-bd30-a60e65e32d7d-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-xclqf\" (UID: \"ed8525f9-d3bf-452b-bd30-a60e65e32d7d\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-xclqf" Jan 28 15:29:32 crc kubenswrapper[4893]: I0128 15:29:32.473403 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed8525f9-d3bf-452b-bd30-a60e65e32d7d-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-xclqf\" (UID: \"ed8525f9-d3bf-452b-bd30-a60e65e32d7d\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-xclqf" Jan 28 15:29:32 crc kubenswrapper[4893]: I0128 15:29:32.480492 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"74ce4d2d-762a-40b6-99fd-b56ae540b1c8","Type":"ContainerStarted","Data":"17750e96091b2d6ed521cc4cd8eb4b1efd3fb07f0fdcce805900939d6b17c5ec"} Jan 28 15:29:32 crc kubenswrapper[4893]: I0128 15:29:32.483624 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-wnfq9" event={"ID":"b7c67810-713f-40ef-a19c-f7a726b17271","Type":"ContainerStarted","Data":"e4560f36097d50df65a49ea433024128a61589fff0ee9f5b798fd465d2c1e167"} Jan 28 15:29:32 crc kubenswrapper[4893]: I0128 15:29:32.575559 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed8525f9-d3bf-452b-bd30-a60e65e32d7d-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-xclqf\" (UID: \"ed8525f9-d3bf-452b-bd30-a60e65e32d7d\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-xclqf" Jan 28 15:29:32 crc kubenswrapper[4893]: I0128 15:29:32.575635 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jnmc\" (UniqueName: \"kubernetes.io/projected/ed8525f9-d3bf-452b-bd30-a60e65e32d7d-kube-api-access-9jnmc\") pod \"nova-kuttl-cell1-conductor-db-sync-xclqf\" (UID: \"ed8525f9-d3bf-452b-bd30-a60e65e32d7d\") " 
pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-xclqf" Jan 28 15:29:32 crc kubenswrapper[4893]: I0128 15:29:32.575667 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed8525f9-d3bf-452b-bd30-a60e65e32d7d-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-xclqf\" (UID: \"ed8525f9-d3bf-452b-bd30-a60e65e32d7d\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-xclqf" Jan 28 15:29:32 crc kubenswrapper[4893]: I0128 15:29:32.582820 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed8525f9-d3bf-452b-bd30-a60e65e32d7d-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-xclqf\" (UID: \"ed8525f9-d3bf-452b-bd30-a60e65e32d7d\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-xclqf" Jan 28 15:29:32 crc kubenswrapper[4893]: I0128 15:29:32.583927 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed8525f9-d3bf-452b-bd30-a60e65e32d7d-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-xclqf\" (UID: \"ed8525f9-d3bf-452b-bd30-a60e65e32d7d\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-xclqf" Jan 28 15:29:32 crc kubenswrapper[4893]: I0128 15:29:32.594222 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jnmc\" (UniqueName: \"kubernetes.io/projected/ed8525f9-d3bf-452b-bd30-a60e65e32d7d-kube-api-access-9jnmc\") pod \"nova-kuttl-cell1-conductor-db-sync-xclqf\" (UID: \"ed8525f9-d3bf-452b-bd30-a60e65e32d7d\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-xclqf" Jan 28 15:29:32 crc kubenswrapper[4893]: W0128 15:29:32.596692 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80a4ed1a_6d11_4cdd_b2bb_23cd7b1e492d.slice/crio-973c0e658f4c4c483716efb32eeb8160557605d65ac22572db86fb4cff25d19e WatchSource:0}: Error finding container 973c0e658f4c4c483716efb32eeb8160557605d65ac22572db86fb4cff25d19e: Status 404 returned error can't find the container with id 973c0e658f4c4c483716efb32eeb8160557605d65ac22572db86fb4cff25d19e Jan 28 15:29:32 crc kubenswrapper[4893]: I0128 15:29:32.600758 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:29:32 crc kubenswrapper[4893]: W0128 15:29:32.602384 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a4152d8_cd1c_478b_977c_3542b4ccf601.slice/crio-d1c961d990fc579efe3d4f50e2027896cf4ba1246d3ad30e73f0054ed48f0eeb WatchSource:0}: Error finding container d1c961d990fc579efe3d4f50e2027896cf4ba1246d3ad30e73f0054ed48f0eeb: Status 404 returned error can't find the container with id d1c961d990fc579efe3d4f50e2027896cf4ba1246d3ad30e73f0054ed48f0eeb Jan 28 15:29:32 crc kubenswrapper[4893]: I0128 15:29:32.612843 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 15:29:32 crc kubenswrapper[4893]: I0128 15:29:32.640773 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-xclqf" Jan 28 15:29:32 crc kubenswrapper[4893]: I0128 15:29:32.706093 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:29:32 crc kubenswrapper[4893]: W0128 15:29:32.725164 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3d06e6b_7f1a_4c58_be65_efff8dfe3d15.slice/crio-ca9753b41d6dd2a73c13ab15a76307ae7898da5ca822bcefc55457bd0056d21c WatchSource:0}: Error finding container ca9753b41d6dd2a73c13ab15a76307ae7898da5ca822bcefc55457bd0056d21c: Status 404 returned error can't find the container with id ca9753b41d6dd2a73c13ab15a76307ae7898da5ca822bcefc55457bd0056d21c Jan 28 15:29:33 crc kubenswrapper[4893]: I0128 15:29:33.165778 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-xclqf"] Jan 28 15:29:33 crc kubenswrapper[4893]: I0128 15:29:33.520112 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"3a4152d8-cd1c-478b-977c-3542b4ccf601","Type":"ContainerStarted","Data":"db0c2cca79c16e5b10bee79aa8e77ced55b5a7abcd299e5cab7c07470cdd2f4d"} Jan 28 15:29:33 crc kubenswrapper[4893]: I0128 15:29:33.520462 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"3a4152d8-cd1c-478b-977c-3542b4ccf601","Type":"ContainerStarted","Data":"d1c961d990fc579efe3d4f50e2027896cf4ba1246d3ad30e73f0054ed48f0eeb"} Jan 28 15:29:33 crc kubenswrapper[4893]: I0128 15:29:33.528777 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d","Type":"ContainerStarted","Data":"7afe546e59039d9e982b04cd67336483a8a4ad4c1af2b8c09d04c94f208aa244"} Jan 28 15:29:33 crc kubenswrapper[4893]: I0128 15:29:33.528834 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d","Type":"ContainerStarted","Data":"973c0e658f4c4c483716efb32eeb8160557605d65ac22572db86fb4cff25d19e"} Jan 28 15:29:33 crc kubenswrapper[4893]: I0128 15:29:33.531713 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"74ce4d2d-762a-40b6-99fd-b56ae540b1c8","Type":"ContainerStarted","Data":"5b6c48f1621a4b20b06acff0b708ff2fd98e6d505a762788cf2da352b386d724"} Jan 28 15:29:33 crc kubenswrapper[4893]: I0128 15:29:33.531757 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"74ce4d2d-762a-40b6-99fd-b56ae540b1c8","Type":"ContainerStarted","Data":"9e7b4810af893fa5ff175a7856bccea8028a0699347f70a2394ce45fd592d5b5"} Jan 28 15:29:33 crc kubenswrapper[4893]: I0128 15:29:33.550142 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-wnfq9" event={"ID":"b7c67810-713f-40ef-a19c-f7a726b17271","Type":"ContainerStarted","Data":"76586a1d37703fd75294f9a31a07e8090d51f7065ae6a2446cd571869a855ada"} Jan 28 15:29:33 crc kubenswrapper[4893]: I0128 15:29:33.556284 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-xclqf" 
event={"ID":"ed8525f9-d3bf-452b-bd30-a60e65e32d7d","Type":"ContainerStarted","Data":"192ae4f21b4990f917167a00f8af309e685591fd51c8f296a00e2395efcab31b"} Jan 28 15:29:33 crc kubenswrapper[4893]: I0128 15:29:33.556359 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-xclqf" event={"ID":"ed8525f9-d3bf-452b-bd30-a60e65e32d7d","Type":"ContainerStarted","Data":"76008f58badb23ed5fe48a0bcaaaaa3903db568879de7b67bf538d8a44dcc9f3"} Jan 28 15:29:33 crc kubenswrapper[4893]: I0128 15:29:33.557525 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podStartSLOduration=2.557498694 podStartE2EDuration="2.557498694s" podCreationTimestamp="2026-01-28 15:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:29:33.540977786 +0000 UTC m=+1691.314592814" watchObservedRunningTime="2026-01-28 15:29:33.557498694 +0000 UTC m=+1691.331113722" Jan 28 15:29:33 crc kubenswrapper[4893]: I0128 15:29:33.573901 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"a3d06e6b-7f1a-4c58-be65-efff8dfe3d15","Type":"ContainerStarted","Data":"44d1fcb7410e1cc24c5f26cb40b99fc0b55bd5d01f912d7934ab9c165029d929"} Jan 28 15:29:33 crc kubenswrapper[4893]: I0128 15:29:33.573969 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"a3d06e6b-7f1a-4c58-be65-efff8dfe3d15","Type":"ContainerStarted","Data":"3e6cd293af714bf5ec4c835bffcc1306a695e363ae64b202b9a893340ac884e9"} Jan 28 15:29:33 crc kubenswrapper[4893]: I0128 15:29:33.573984 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"a3d06e6b-7f1a-4c58-be65-efff8dfe3d15","Type":"ContainerStarted","Data":"ca9753b41d6dd2a73c13ab15a76307ae7898da5ca822bcefc55457bd0056d21c"} Jan 28 15:29:33 crc kubenswrapper[4893]: I0128 15:29:33.574097 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.574071424 podStartE2EDuration="2.574071424s" podCreationTimestamp="2026-01-28 15:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:29:33.563001734 +0000 UTC m=+1691.336616762" watchObservedRunningTime="2026-01-28 15:29:33.574071424 +0000 UTC m=+1691.347686462" Jan 28 15:29:33 crc kubenswrapper[4893]: I0128 15:29:33.593547 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.59352062 podStartE2EDuration="2.59352062s" podCreationTimestamp="2026-01-28 15:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:29:33.591909297 +0000 UTC m=+1691.365524345" watchObservedRunningTime="2026-01-28 15:29:33.59352062 +0000 UTC m=+1691.367135648" Jan 28 15:29:33 crc kubenswrapper[4893]: I0128 15:29:33.642741 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-wnfq9" podStartSLOduration=2.642723206 podStartE2EDuration="2.642723206s" podCreationTimestamp="2026-01-28 15:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:29:33.64138869 +0000 UTC m=+1691.415003728" watchObservedRunningTime="2026-01-28 15:29:33.642723206 +0000 UTC m=+1691.416338234" Jan 28 15:29:33 crc kubenswrapper[4893]: I0128 15:29:33.644206 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-xclqf" podStartSLOduration=1.6441989860000001 podStartE2EDuration="1.644198986s" podCreationTimestamp="2026-01-28 15:29:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:29:33.622816646 +0000 UTC m=+1691.396431674" watchObservedRunningTime="2026-01-28 15:29:33.644198986 +0000 UTC m=+1691.417814014" Jan 28 15:29:33 crc kubenswrapper[4893]: I0128 15:29:33.664977 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.664957049 podStartE2EDuration="2.664957049s" podCreationTimestamp="2026-01-28 15:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:29:33.659981023 +0000 UTC m=+1691.433596052" watchObservedRunningTime="2026-01-28 15:29:33.664957049 +0000 UTC m=+1691.438572077" Jan 28 15:29:36 crc kubenswrapper[4893]: I0128 15:29:36.601880 4893 generic.go:334] "Generic (PLEG): container finished" podID="ed8525f9-d3bf-452b-bd30-a60e65e32d7d" containerID="192ae4f21b4990f917167a00f8af309e685591fd51c8f296a00e2395efcab31b" exitCode=0 Jan 28 15:29:36 crc kubenswrapper[4893]: I0128 15:29:36.601979 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-xclqf" event={"ID":"ed8525f9-d3bf-452b-bd30-a60e65e32d7d","Type":"ContainerDied","Data":"192ae4f21b4990f917167a00f8af309e685591fd51c8f296a00e2395efcab31b"} Jan 28 15:29:37 crc kubenswrapper[4893]: I0128 15:29:37.042438 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:29:37 crc kubenswrapper[4893]: I0128 15:29:37.058541 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:29:37 crc kubenswrapper[4893]: I0128 15:29:37.078070 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:37 crc kubenswrapper[4893]: I0128 15:29:37.078927 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:37 crc kubenswrapper[4893]: I0128 15:29:37.972225 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-xclqf" Jan 28 15:29:38 crc kubenswrapper[4893]: I0128 15:29:38.115901 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed8525f9-d3bf-452b-bd30-a60e65e32d7d-config-data\") pod \"ed8525f9-d3bf-452b-bd30-a60e65e32d7d\" (UID: \"ed8525f9-d3bf-452b-bd30-a60e65e32d7d\") " Jan 28 15:29:38 crc kubenswrapper[4893]: I0128 15:29:38.116026 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jnmc\" (UniqueName: \"kubernetes.io/projected/ed8525f9-d3bf-452b-bd30-a60e65e32d7d-kube-api-access-9jnmc\") pod \"ed8525f9-d3bf-452b-bd30-a60e65e32d7d\" (UID: \"ed8525f9-d3bf-452b-bd30-a60e65e32d7d\") " Jan 28 15:29:38 crc kubenswrapper[4893]: I0128 15:29:38.116102 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed8525f9-d3bf-452b-bd30-a60e65e32d7d-scripts\") pod \"ed8525f9-d3bf-452b-bd30-a60e65e32d7d\" (UID: \"ed8525f9-d3bf-452b-bd30-a60e65e32d7d\") " Jan 28 15:29:38 crc kubenswrapper[4893]: I0128 15:29:38.122329 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed8525f9-d3bf-452b-bd30-a60e65e32d7d-kube-api-access-9jnmc" (OuterVolumeSpecName: "kube-api-access-9jnmc") pod "ed8525f9-d3bf-452b-bd30-a60e65e32d7d" (UID: "ed8525f9-d3bf-452b-bd30-a60e65e32d7d"). InnerVolumeSpecName "kube-api-access-9jnmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:29:38 crc kubenswrapper[4893]: I0128 15:29:38.122585 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed8525f9-d3bf-452b-bd30-a60e65e32d7d-scripts" (OuterVolumeSpecName: "scripts") pod "ed8525f9-d3bf-452b-bd30-a60e65e32d7d" (UID: "ed8525f9-d3bf-452b-bd30-a60e65e32d7d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:29:38 crc kubenswrapper[4893]: I0128 15:29:38.144086 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed8525f9-d3bf-452b-bd30-a60e65e32d7d-config-data" (OuterVolumeSpecName: "config-data") pod "ed8525f9-d3bf-452b-bd30-a60e65e32d7d" (UID: "ed8525f9-d3bf-452b-bd30-a60e65e32d7d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:29:38 crc kubenswrapper[4893]: I0128 15:29:38.218623 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed8525f9-d3bf-452b-bd30-a60e65e32d7d-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:38 crc kubenswrapper[4893]: I0128 15:29:38.218658 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jnmc\" (UniqueName: \"kubernetes.io/projected/ed8525f9-d3bf-452b-bd30-a60e65e32d7d-kube-api-access-9jnmc\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:38 crc kubenswrapper[4893]: I0128 15:29:38.218669 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed8525f9-d3bf-452b-bd30-a60e65e32d7d-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:38 crc kubenswrapper[4893]: I0128 15:29:38.621462 4893 generic.go:334] "Generic (PLEG): container finished" podID="b7c67810-713f-40ef-a19c-f7a726b17271" containerID="76586a1d37703fd75294f9a31a07e8090d51f7065ae6a2446cd571869a855ada" exitCode=0 Jan 28 15:29:38 crc kubenswrapper[4893]: I0128 15:29:38.621504 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-wnfq9" event={"ID":"b7c67810-713f-40ef-a19c-f7a726b17271","Type":"ContainerDied","Data":"76586a1d37703fd75294f9a31a07e8090d51f7065ae6a2446cd571869a855ada"} Jan 28 15:29:38 crc kubenswrapper[4893]: I0128 15:29:38.625207 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-xclqf" event={"ID":"ed8525f9-d3bf-452b-bd30-a60e65e32d7d","Type":"ContainerDied","Data":"76008f58badb23ed5fe48a0bcaaaaa3903db568879de7b67bf538d8a44dcc9f3"} Jan 28 15:29:38 crc kubenswrapper[4893]: I0128 15:29:38.625270 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76008f58badb23ed5fe48a0bcaaaaa3903db568879de7b67bf538d8a44dcc9f3" Jan 28 15:29:38 crc kubenswrapper[4893]: I0128 15:29:38.625293 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-xclqf" Jan 28 15:29:38 crc kubenswrapper[4893]: I0128 15:29:38.702363 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 15:29:38 crc kubenswrapper[4893]: E0128 15:29:38.703021 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed8525f9-d3bf-452b-bd30-a60e65e32d7d" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 28 15:29:38 crc kubenswrapper[4893]: I0128 15:29:38.703042 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed8525f9-d3bf-452b-bd30-a60e65e32d7d" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 28 15:29:38 crc kubenswrapper[4893]: I0128 15:29:38.703185 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed8525f9-d3bf-452b-bd30-a60e65e32d7d" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 28 15:29:38 crc kubenswrapper[4893]: I0128 15:29:38.703743 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:29:38 crc kubenswrapper[4893]: I0128 15:29:38.708869 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 28 15:29:38 crc kubenswrapper[4893]: I0128 15:29:38.713298 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 15:29:38 crc kubenswrapper[4893]: I0128 15:29:38.830215 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnpwt\" (UniqueName: \"kubernetes.io/projected/6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f-kube-api-access-bnpwt\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:29:38 crc kubenswrapper[4893]: I0128 15:29:38.830273 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:29:38 crc kubenswrapper[4893]: I0128 15:29:38.931785 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnpwt\" (UniqueName: \"kubernetes.io/projected/6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f-kube-api-access-bnpwt\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:29:38 crc kubenswrapper[4893]: I0128 15:29:38.931832 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:29:38 crc kubenswrapper[4893]: I0128 15:29:38.937669 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:29:38 crc kubenswrapper[4893]: I0128 15:29:38.951503 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnpwt\" (UniqueName: \"kubernetes.io/projected/6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f-kube-api-access-bnpwt\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:29:39 crc kubenswrapper[4893]: I0128 15:29:39.022202 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:29:39 crc kubenswrapper[4893]: I0128 15:29:39.448126 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 15:29:39 crc kubenswrapper[4893]: I0128 15:29:39.636740 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f","Type":"ContainerStarted","Data":"df29e68cc60a3c12a95d57e15180d096837b10f3e05df9dbc7b84c86eb96d3fe"} Jan 28 15:29:39 crc kubenswrapper[4893]: I0128 15:29:39.637113 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f","Type":"ContainerStarted","Data":"c2dd1200bd7422eda215b044e929d3b92a746a7ebc095161e06ee52c31f10390"} Jan 28 15:29:39 crc kubenswrapper[4893]: I0128 15:29:39.637301 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:29:39 crc kubenswrapper[4893]: I0128 15:29:39.671832 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podStartSLOduration=1.671809932 podStartE2EDuration="1.671809932s" podCreationTimestamp="2026-01-28 15:29:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:29:39.667123335 +0000 UTC m=+1697.440738383" watchObservedRunningTime="2026-01-28 15:29:39.671809932 +0000 UTC m=+1697.445424960" Jan 28 15:29:39 crc kubenswrapper[4893]: I0128 15:29:39.925523 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-wnfq9" Jan 28 15:29:40 crc kubenswrapper[4893]: I0128 15:29:40.050496 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7c67810-713f-40ef-a19c-f7a726b17271-config-data\") pod \"b7c67810-713f-40ef-a19c-f7a726b17271\" (UID: \"b7c67810-713f-40ef-a19c-f7a726b17271\") " Jan 28 15:29:40 crc kubenswrapper[4893]: I0128 15:29:40.050556 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7c67810-713f-40ef-a19c-f7a726b17271-scripts\") pod \"b7c67810-713f-40ef-a19c-f7a726b17271\" (UID: \"b7c67810-713f-40ef-a19c-f7a726b17271\") " Jan 28 15:29:40 crc kubenswrapper[4893]: I0128 15:29:40.050764 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pg2z\" (UniqueName: \"kubernetes.io/projected/b7c67810-713f-40ef-a19c-f7a726b17271-kube-api-access-6pg2z\") pod \"b7c67810-713f-40ef-a19c-f7a726b17271\" (UID: \"b7c67810-713f-40ef-a19c-f7a726b17271\") " Jan 28 15:29:40 crc kubenswrapper[4893]: I0128 15:29:40.055939 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7c67810-713f-40ef-a19c-f7a726b17271-kube-api-access-6pg2z" (OuterVolumeSpecName: "kube-api-access-6pg2z") pod "b7c67810-713f-40ef-a19c-f7a726b17271" (UID: "b7c67810-713f-40ef-a19c-f7a726b17271"). InnerVolumeSpecName "kube-api-access-6pg2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:29:40 crc kubenswrapper[4893]: I0128 15:29:40.056169 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c67810-713f-40ef-a19c-f7a726b17271-scripts" (OuterVolumeSpecName: "scripts") pod "b7c67810-713f-40ef-a19c-f7a726b17271" (UID: "b7c67810-713f-40ef-a19c-f7a726b17271"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:29:40 crc kubenswrapper[4893]: I0128 15:29:40.074695 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c67810-713f-40ef-a19c-f7a726b17271-config-data" (OuterVolumeSpecName: "config-data") pod "b7c67810-713f-40ef-a19c-f7a726b17271" (UID: "b7c67810-713f-40ef-a19c-f7a726b17271"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:29:40 crc kubenswrapper[4893]: I0128 15:29:40.153295 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pg2z\" (UniqueName: \"kubernetes.io/projected/b7c67810-713f-40ef-a19c-f7a726b17271-kube-api-access-6pg2z\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:40 crc kubenswrapper[4893]: I0128 15:29:40.153340 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7c67810-713f-40ef-a19c-f7a726b17271-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:40 crc kubenswrapper[4893]: I0128 15:29:40.153350 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7c67810-713f-40ef-a19c-f7a726b17271-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:40 crc kubenswrapper[4893]: I0128 15:29:40.650243 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-wnfq9" event={"ID":"b7c67810-713f-40ef-a19c-f7a726b17271","Type":"ContainerDied","Data":"e4560f36097d50df65a49ea433024128a61589fff0ee9f5b798fd465d2c1e167"} Jan 28 15:29:40 crc kubenswrapper[4893]: I0128 15:29:40.650610 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4560f36097d50df65a49ea433024128a61589fff0ee9f5b798fd465d2c1e167" Jan 28 15:29:40 crc kubenswrapper[4893]: I0128 15:29:40.650328 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-wnfq9" Jan 28 15:29:40 crc kubenswrapper[4893]: I0128 15:29:40.828538 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:29:40 crc kubenswrapper[4893]: I0128 15:29:40.828818 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="74ce4d2d-762a-40b6-99fd-b56ae540b1c8" containerName="nova-kuttl-api-log" containerID="cri-o://9e7b4810af893fa5ff175a7856bccea8028a0699347f70a2394ce45fd592d5b5" gracePeriod=30 Jan 28 15:29:40 crc kubenswrapper[4893]: I0128 15:29:40.829398 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="74ce4d2d-762a-40b6-99fd-b56ae540b1c8" containerName="nova-kuttl-api-api" containerID="cri-o://5b6c48f1621a4b20b06acff0b708ff2fd98e6d505a762788cf2da352b386d724" gracePeriod=30 Jan 28 15:29:40 crc kubenswrapper[4893]: I0128 15:29:40.852954 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:29:40 crc kubenswrapper[4893]: I0128 15:29:40.853170 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://7afe546e59039d9e982b04cd67336483a8a4ad4c1af2b8c09d04c94f208aa244" gracePeriod=30 Jan 28 15:29:40 crc kubenswrapper[4893]: I0128 15:29:40.946707 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:29:40 crc kubenswrapper[4893]: I0128 15:29:40.947648 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="a3d06e6b-7f1a-4c58-be65-efff8dfe3d15" containerName="nova-kuttl-metadata-log" containerID="cri-o://3e6cd293af714bf5ec4c835bffcc1306a695e363ae64b202b9a893340ac884e9" gracePeriod=30 Jan 28 15:29:40 crc kubenswrapper[4893]: I0128 15:29:40.947777 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="a3d06e6b-7f1a-4c58-be65-efff8dfe3d15" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://44d1fcb7410e1cc24c5f26cb40b99fc0b55bd5d01f912d7934ab9c165029d929" gracePeriod=30 Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.451114 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.520821 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.576192 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74ce4d2d-762a-40b6-99fd-b56ae540b1c8-config-data\") pod \"74ce4d2d-762a-40b6-99fd-b56ae540b1c8\" (UID: \"74ce4d2d-762a-40b6-99fd-b56ae540b1c8\") " Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.576433 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptnd5\" (UniqueName: \"kubernetes.io/projected/74ce4d2d-762a-40b6-99fd-b56ae540b1c8-kube-api-access-ptnd5\") pod \"74ce4d2d-762a-40b6-99fd-b56ae540b1c8\" (UID: \"74ce4d2d-762a-40b6-99fd-b56ae540b1c8\") " Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.576550 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74ce4d2d-762a-40b6-99fd-b56ae540b1c8-logs\") pod \"74ce4d2d-762a-40b6-99fd-b56ae540b1c8\" (UID: \"74ce4d2d-762a-40b6-99fd-b56ae540b1c8\") " Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.577313 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74ce4d2d-762a-40b6-99fd-b56ae540b1c8-logs" (OuterVolumeSpecName: "logs") pod "74ce4d2d-762a-40b6-99fd-b56ae540b1c8" (UID: "74ce4d2d-762a-40b6-99fd-b56ae540b1c8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.582873 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74ce4d2d-762a-40b6-99fd-b56ae540b1c8-kube-api-access-ptnd5" (OuterVolumeSpecName: "kube-api-access-ptnd5") pod "74ce4d2d-762a-40b6-99fd-b56ae540b1c8" (UID: "74ce4d2d-762a-40b6-99fd-b56ae540b1c8"). InnerVolumeSpecName "kube-api-access-ptnd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.603372 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74ce4d2d-762a-40b6-99fd-b56ae540b1c8-config-data" (OuterVolumeSpecName: "config-data") pod "74ce4d2d-762a-40b6-99fd-b56ae540b1c8" (UID: "74ce4d2d-762a-40b6-99fd-b56ae540b1c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.661537 4893 generic.go:334] "Generic (PLEG): container finished" podID="a3d06e6b-7f1a-4c58-be65-efff8dfe3d15" containerID="44d1fcb7410e1cc24c5f26cb40b99fc0b55bd5d01f912d7934ab9c165029d929" exitCode=0 Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.661584 4893 generic.go:334] "Generic (PLEG): container finished" podID="a3d06e6b-7f1a-4c58-be65-efff8dfe3d15" containerID="3e6cd293af714bf5ec4c835bffcc1306a695e363ae64b202b9a893340ac884e9" exitCode=143 Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.661611 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.661639 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"a3d06e6b-7f1a-4c58-be65-efff8dfe3d15","Type":"ContainerDied","Data":"44d1fcb7410e1cc24c5f26cb40b99fc0b55bd5d01f912d7934ab9c165029d929"} Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.661711 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"a3d06e6b-7f1a-4c58-be65-efff8dfe3d15","Type":"ContainerDied","Data":"3e6cd293af714bf5ec4c835bffcc1306a695e363ae64b202b9a893340ac884e9"} Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.661730 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"a3d06e6b-7f1a-4c58-be65-efff8dfe3d15","Type":"ContainerDied","Data":"ca9753b41d6dd2a73c13ab15a76307ae7898da5ca822bcefc55457bd0056d21c"} Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.661754 4893 scope.go:117] "RemoveContainer" containerID="44d1fcb7410e1cc24c5f26cb40b99fc0b55bd5d01f912d7934ab9c165029d929" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.667142 4893 generic.go:334] "Generic (PLEG): container finished" podID="74ce4d2d-762a-40b6-99fd-b56ae540b1c8" containerID="5b6c48f1621a4b20b06acff0b708ff2fd98e6d505a762788cf2da352b386d724" exitCode=0 Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.667184 4893 generic.go:334] "Generic (PLEG): container finished" podID="74ce4d2d-762a-40b6-99fd-b56ae540b1c8" containerID="9e7b4810af893fa5ff175a7856bccea8028a0699347f70a2394ce45fd592d5b5" exitCode=143 Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.667210 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"74ce4d2d-762a-40b6-99fd-b56ae540b1c8","Type":"ContainerDied","Data":"5b6c48f1621a4b20b06acff0b708ff2fd98e6d505a762788cf2da352b386d724"} Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.667274 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"74ce4d2d-762a-40b6-99fd-b56ae540b1c8","Type":"ContainerDied","Data":"9e7b4810af893fa5ff175a7856bccea8028a0699347f70a2394ce45fd592d5b5"} Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.667297 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"74ce4d2d-762a-40b6-99fd-b56ae540b1c8","Type":"ContainerDied","Data":"17750e96091b2d6ed521cc4cd8eb4b1efd3fb07f0fdcce805900939d6b17c5ec"} Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.668049 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.678267 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfhrm\" (UniqueName: \"kubernetes.io/projected/a3d06e6b-7f1a-4c58-be65-efff8dfe3d15-kube-api-access-qfhrm\") pod \"a3d06e6b-7f1a-4c58-be65-efff8dfe3d15\" (UID: \"a3d06e6b-7f1a-4c58-be65-efff8dfe3d15\") " Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.678338 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3d06e6b-7f1a-4c58-be65-efff8dfe3d15-logs\") pod \"a3d06e6b-7f1a-4c58-be65-efff8dfe3d15\" (UID: \"a3d06e6b-7f1a-4c58-be65-efff8dfe3d15\") " Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.678378 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d06e6b-7f1a-4c58-be65-efff8dfe3d15-config-data\") pod \"a3d06e6b-7f1a-4c58-be65-efff8dfe3d15\" (UID: \"a3d06e6b-7f1a-4c58-be65-efff8dfe3d15\") " Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.678754 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3d06e6b-7f1a-4c58-be65-efff8dfe3d15-logs" (OuterVolumeSpecName: "logs") pod "a3d06e6b-7f1a-4c58-be65-efff8dfe3d15" (UID: "a3d06e6b-7f1a-4c58-be65-efff8dfe3d15"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.679143 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptnd5\" (UniqueName: \"kubernetes.io/projected/74ce4d2d-762a-40b6-99fd-b56ae540b1c8-kube-api-access-ptnd5\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.679172 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74ce4d2d-762a-40b6-99fd-b56ae540b1c8-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.679187 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74ce4d2d-762a-40b6-99fd-b56ae540b1c8-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.679202 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3d06e6b-7f1a-4c58-be65-efff8dfe3d15-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.684198 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3d06e6b-7f1a-4c58-be65-efff8dfe3d15-kube-api-access-qfhrm" (OuterVolumeSpecName: "kube-api-access-qfhrm") pod "a3d06e6b-7f1a-4c58-be65-efff8dfe3d15" (UID: "a3d06e6b-7f1a-4c58-be65-efff8dfe3d15"). InnerVolumeSpecName "kube-api-access-qfhrm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.692132 4893 scope.go:117] "RemoveContainer" containerID="3e6cd293af714bf5ec4c835bffcc1306a695e363ae64b202b9a893340ac884e9" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.713759 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.715345 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d06e6b-7f1a-4c58-be65-efff8dfe3d15-config-data" (OuterVolumeSpecName: "config-data") pod "a3d06e6b-7f1a-4c58-be65-efff8dfe3d15" (UID: "a3d06e6b-7f1a-4c58-be65-efff8dfe3d15"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.723677 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.724997 4893 scope.go:117] "RemoveContainer" containerID="44d1fcb7410e1cc24c5f26cb40b99fc0b55bd5d01f912d7934ab9c165029d929" Jan 28 15:29:41 crc kubenswrapper[4893]: E0128 15:29:41.725430 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44d1fcb7410e1cc24c5f26cb40b99fc0b55bd5d01f912d7934ab9c165029d929\": container with ID starting with 44d1fcb7410e1cc24c5f26cb40b99fc0b55bd5d01f912d7934ab9c165029d929 not found: ID does not exist" containerID="44d1fcb7410e1cc24c5f26cb40b99fc0b55bd5d01f912d7934ab9c165029d929" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.725458 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44d1fcb7410e1cc24c5f26cb40b99fc0b55bd5d01f912d7934ab9c165029d929"} err="failed to get container status \"44d1fcb7410e1cc24c5f26cb40b99fc0b55bd5d01f912d7934ab9c165029d929\": rpc error: code = NotFound desc = could not find container \"44d1fcb7410e1cc24c5f26cb40b99fc0b55bd5d01f912d7934ab9c165029d929\": container with ID starting with 44d1fcb7410e1cc24c5f26cb40b99fc0b55bd5d01f912d7934ab9c165029d929 not found: ID does not exist" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.725492 4893 scope.go:117] "RemoveContainer" containerID="3e6cd293af714bf5ec4c835bffcc1306a695e363ae64b202b9a893340ac884e9" Jan 28 15:29:41 crc kubenswrapper[4893]: E0128 15:29:41.725783 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e6cd293af714bf5ec4c835bffcc1306a695e363ae64b202b9a893340ac884e9\": container with ID starting with 3e6cd293af714bf5ec4c835bffcc1306a695e363ae64b202b9a893340ac884e9 not found: ID does not exist" containerID="3e6cd293af714bf5ec4c835bffcc1306a695e363ae64b202b9a893340ac884e9" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.725844 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e6cd293af714bf5ec4c835bffcc1306a695e363ae64b202b9a893340ac884e9"} err="failed to get container status \"3e6cd293af714bf5ec4c835bffcc1306a695e363ae64b202b9a893340ac884e9\": rpc error: code = NotFound desc = could not find container \"3e6cd293af714bf5ec4c835bffcc1306a695e363ae64b202b9a893340ac884e9\": container with ID starting with 3e6cd293af714bf5ec4c835bffcc1306a695e363ae64b202b9a893340ac884e9 not found: ID does not exist" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.725858 4893 scope.go:117] 
"RemoveContainer" containerID="44d1fcb7410e1cc24c5f26cb40b99fc0b55bd5d01f912d7934ab9c165029d929" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.726167 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44d1fcb7410e1cc24c5f26cb40b99fc0b55bd5d01f912d7934ab9c165029d929"} err="failed to get container status \"44d1fcb7410e1cc24c5f26cb40b99fc0b55bd5d01f912d7934ab9c165029d929\": rpc error: code = NotFound desc = could not find container \"44d1fcb7410e1cc24c5f26cb40b99fc0b55bd5d01f912d7934ab9c165029d929\": container with ID starting with 44d1fcb7410e1cc24c5f26cb40b99fc0b55bd5d01f912d7934ab9c165029d929 not found: ID does not exist" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.726195 4893 scope.go:117] "RemoveContainer" containerID="3e6cd293af714bf5ec4c835bffcc1306a695e363ae64b202b9a893340ac884e9" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.726464 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e6cd293af714bf5ec4c835bffcc1306a695e363ae64b202b9a893340ac884e9"} err="failed to get container status \"3e6cd293af714bf5ec4c835bffcc1306a695e363ae64b202b9a893340ac884e9\": rpc error: code = NotFound desc = could not find container \"3e6cd293af714bf5ec4c835bffcc1306a695e363ae64b202b9a893340ac884e9\": container with ID starting with 3e6cd293af714bf5ec4c835bffcc1306a695e363ae64b202b9a893340ac884e9 not found: ID does not exist" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.726498 4893 scope.go:117] "RemoveContainer" containerID="5b6c48f1621a4b20b06acff0b708ff2fd98e6d505a762788cf2da352b386d724" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.740446 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:29:41 crc kubenswrapper[4893]: E0128 15:29:41.741013 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74ce4d2d-762a-40b6-99fd-b56ae540b1c8" containerName="nova-kuttl-api-api" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.741033 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="74ce4d2d-762a-40b6-99fd-b56ae540b1c8" containerName="nova-kuttl-api-api" Jan 28 15:29:41 crc kubenswrapper[4893]: E0128 15:29:41.741057 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3d06e6b-7f1a-4c58-be65-efff8dfe3d15" containerName="nova-kuttl-metadata-log" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.741066 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d06e6b-7f1a-4c58-be65-efff8dfe3d15" containerName="nova-kuttl-metadata-log" Jan 28 15:29:41 crc kubenswrapper[4893]: E0128 15:29:41.741085 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7c67810-713f-40ef-a19c-f7a726b17271" containerName="nova-manage" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.741093 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7c67810-713f-40ef-a19c-f7a726b17271" containerName="nova-manage" Jan 28 15:29:41 crc kubenswrapper[4893]: E0128 15:29:41.741111 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74ce4d2d-762a-40b6-99fd-b56ae540b1c8" containerName="nova-kuttl-api-log" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.741119 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="74ce4d2d-762a-40b6-99fd-b56ae540b1c8" containerName="nova-kuttl-api-log" Jan 28 15:29:41 crc kubenswrapper[4893]: E0128 15:29:41.741126 4893 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a3d06e6b-7f1a-4c58-be65-efff8dfe3d15" containerName="nova-kuttl-metadata-metadata" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.741132 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d06e6b-7f1a-4c58-be65-efff8dfe3d15" containerName="nova-kuttl-metadata-metadata" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.741303 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3d06e6b-7f1a-4c58-be65-efff8dfe3d15" containerName="nova-kuttl-metadata-log" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.741327 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3d06e6b-7f1a-4c58-be65-efff8dfe3d15" containerName="nova-kuttl-metadata-metadata" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.741349 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="74ce4d2d-762a-40b6-99fd-b56ae540b1c8" containerName="nova-kuttl-api-log" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.741362 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="74ce4d2d-762a-40b6-99fd-b56ae540b1c8" containerName="nova-kuttl-api-api" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.741372 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7c67810-713f-40ef-a19c-f7a726b17271" containerName="nova-manage" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.742547 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.745076 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.748818 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.762854 4893 scope.go:117] "RemoveContainer" containerID="9e7b4810af893fa5ff175a7856bccea8028a0699347f70a2394ce45fd592d5b5" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.781404 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfhrm\" (UniqueName: \"kubernetes.io/projected/a3d06e6b-7f1a-4c58-be65-efff8dfe3d15-kube-api-access-qfhrm\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.781448 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d06e6b-7f1a-4c58-be65-efff8dfe3d15-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.784560 4893 scope.go:117] "RemoveContainer" containerID="5b6c48f1621a4b20b06acff0b708ff2fd98e6d505a762788cf2da352b386d724" Jan 28 15:29:41 crc kubenswrapper[4893]: E0128 15:29:41.785159 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b6c48f1621a4b20b06acff0b708ff2fd98e6d505a762788cf2da352b386d724\": container with ID starting with 5b6c48f1621a4b20b06acff0b708ff2fd98e6d505a762788cf2da352b386d724 not found: ID does not exist" containerID="5b6c48f1621a4b20b06acff0b708ff2fd98e6d505a762788cf2da352b386d724" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.785225 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b6c48f1621a4b20b06acff0b708ff2fd98e6d505a762788cf2da352b386d724"} err="failed to get container status 
\"5b6c48f1621a4b20b06acff0b708ff2fd98e6d505a762788cf2da352b386d724\": rpc error: code = NotFound desc = could not find container \"5b6c48f1621a4b20b06acff0b708ff2fd98e6d505a762788cf2da352b386d724\": container with ID starting with 5b6c48f1621a4b20b06acff0b708ff2fd98e6d505a762788cf2da352b386d724 not found: ID does not exist" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.785262 4893 scope.go:117] "RemoveContainer" containerID="9e7b4810af893fa5ff175a7856bccea8028a0699347f70a2394ce45fd592d5b5" Jan 28 15:29:41 crc kubenswrapper[4893]: E0128 15:29:41.785760 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e7b4810af893fa5ff175a7856bccea8028a0699347f70a2394ce45fd592d5b5\": container with ID starting with 9e7b4810af893fa5ff175a7856bccea8028a0699347f70a2394ce45fd592d5b5 not found: ID does not exist" containerID="9e7b4810af893fa5ff175a7856bccea8028a0699347f70a2394ce45fd592d5b5" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.785792 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e7b4810af893fa5ff175a7856bccea8028a0699347f70a2394ce45fd592d5b5"} err="failed to get container status \"9e7b4810af893fa5ff175a7856bccea8028a0699347f70a2394ce45fd592d5b5\": rpc error: code = NotFound desc = could not find container \"9e7b4810af893fa5ff175a7856bccea8028a0699347f70a2394ce45fd592d5b5\": container with ID starting with 9e7b4810af893fa5ff175a7856bccea8028a0699347f70a2394ce45fd592d5b5 not found: ID does not exist" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.785806 4893 scope.go:117] "RemoveContainer" containerID="5b6c48f1621a4b20b06acff0b708ff2fd98e6d505a762788cf2da352b386d724" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.786255 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b6c48f1621a4b20b06acff0b708ff2fd98e6d505a762788cf2da352b386d724"} err="failed to get container status \"5b6c48f1621a4b20b06acff0b708ff2fd98e6d505a762788cf2da352b386d724\": rpc error: code = NotFound desc = could not find container \"5b6c48f1621a4b20b06acff0b708ff2fd98e6d505a762788cf2da352b386d724\": container with ID starting with 5b6c48f1621a4b20b06acff0b708ff2fd98e6d505a762788cf2da352b386d724 not found: ID does not exist" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.786275 4893 scope.go:117] "RemoveContainer" containerID="9e7b4810af893fa5ff175a7856bccea8028a0699347f70a2394ce45fd592d5b5" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.786595 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e7b4810af893fa5ff175a7856bccea8028a0699347f70a2394ce45fd592d5b5"} err="failed to get container status \"9e7b4810af893fa5ff175a7856bccea8028a0699347f70a2394ce45fd592d5b5\": rpc error: code = NotFound desc = could not find container \"9e7b4810af893fa5ff175a7856bccea8028a0699347f70a2394ce45fd592d5b5\": container with ID starting with 9e7b4810af893fa5ff175a7856bccea8028a0699347f70a2394ce45fd592d5b5 not found: ID does not exist" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.882521 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be84d443-b5a7-4cf3-bd18-ae0137323662-config-data\") pod \"nova-kuttl-api-0\" (UID: \"be84d443-b5a7-4cf3-bd18-ae0137323662\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.882605 
Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.882605 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stmtc\" (UniqueName: \"kubernetes.io/projected/be84d443-b5a7-4cf3-bd18-ae0137323662-kube-api-access-stmtc\") pod \"nova-kuttl-api-0\" (UID: \"be84d443-b5a7-4cf3-bd18-ae0137323662\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.882714 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be84d443-b5a7-4cf3-bd18-ae0137323662-logs\") pod \"nova-kuttl-api-0\" (UID: \"be84d443-b5a7-4cf3-bd18-ae0137323662\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.983578 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be84d443-b5a7-4cf3-bd18-ae0137323662-logs\") pod \"nova-kuttl-api-0\" (UID: \"be84d443-b5a7-4cf3-bd18-ae0137323662\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.983662 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be84d443-b5a7-4cf3-bd18-ae0137323662-config-data\") pod \"nova-kuttl-api-0\" (UID: \"be84d443-b5a7-4cf3-bd18-ae0137323662\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.983760 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stmtc\" (UniqueName: \"kubernetes.io/projected/be84d443-b5a7-4cf3-bd18-ae0137323662-kube-api-access-stmtc\") pod \"nova-kuttl-api-0\" (UID: \"be84d443-b5a7-4cf3-bd18-ae0137323662\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.985400 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be84d443-b5a7-4cf3-bd18-ae0137323662-logs\") pod \"nova-kuttl-api-0\" (UID: \"be84d443-b5a7-4cf3-bd18-ae0137323662\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.988538 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be84d443-b5a7-4cf3-bd18-ae0137323662-config-data\") pod \"nova-kuttl-api-0\" (UID: \"be84d443-b5a7-4cf3-bd18-ae0137323662\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:41 crc kubenswrapper[4893]: I0128 15:29:41.998186 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:29:42 crc kubenswrapper[4893]: I0128 15:29:42.000584 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stmtc\" (UniqueName: \"kubernetes.io/projected/be84d443-b5a7-4cf3-bd18-ae0137323662-kube-api-access-stmtc\") pod \"nova-kuttl-api-0\" (UID: \"be84d443-b5a7-4cf3-bd18-ae0137323662\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:42 crc kubenswrapper[4893]: I0128 15:29:42.006543 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:29:42 crc kubenswrapper[4893]: I0128 15:29:42.020873 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:29:42 crc kubenswrapper[4893]: I0128 15:29:42.022214 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
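The VerifyControllerAttachedVolume / MountVolume / MountVolume.SetUp triples above are the kubelet volume manager reconciling the pod's desired volumes (logs, config-data, kube-api-access-stmtc for nova-kuttl-api-0) against what is actually mounted on the node. A rough, level-triggered sketch of that reconcile shape under hypothetical names (the real code paths are reconciler_common.go and operation_generator.go, cited in the records themselves):

    package main

    import "fmt"

    // reconcile mounts whatever the pod spec wants that the node does not
    // have yet; rerunning it is safe because mounted volumes are skipped.
    func reconcile(desired []string, mounted map[string]bool) {
            for _, vol := range desired {
                    if mounted[vol] {
                            continue // actual state already matches desired state
                    }
                    fmt.Printf("MountVolume started for volume %q\n", vol)
                    mounted[vol] = true // stand-in for the real SetUp() call
                    fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", vol)
            }
    }

    func main() {
            desired := []string{"logs", "config-data", "kube-api-access-stmtc"}
            mounted := map[string]bool{}
            reconcile(desired, mounted) // first pass mounts all three
            reconcile(desired, mounted) // second pass is a no-op
    }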
Jan 28 15:29:42 crc kubenswrapper[4893]: I0128 15:29:42.024883 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 28 15:29:42 crc kubenswrapper[4893]: I0128 15:29:42.042677 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:29:42 crc kubenswrapper[4893]: I0128 15:29:42.043176 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:29:42 crc kubenswrapper[4893]: I0128 15:29:42.061467 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:29:42 crc kubenswrapper[4893]: I0128 15:29:42.071146 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:42 crc kubenswrapper[4893]: I0128 15:29:42.195101 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6785ef1e-5034-4cd3-adee-1de1fb62373d-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"6785ef1e-5034-4cd3-adee-1de1fb62373d\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:42 crc kubenswrapper[4893]: I0128 15:29:42.195265 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6785ef1e-5034-4cd3-adee-1de1fb62373d-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"6785ef1e-5034-4cd3-adee-1de1fb62373d\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:42 crc kubenswrapper[4893]: I0128 15:29:42.195297 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x9gs\" (UniqueName: \"kubernetes.io/projected/6785ef1e-5034-4cd3-adee-1de1fb62373d-kube-api-access-5x9gs\") pod \"nova-kuttl-metadata-0\" (UID: \"6785ef1e-5034-4cd3-adee-1de1fb62373d\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:42 crc kubenswrapper[4893]: I0128 15:29:42.296693 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6785ef1e-5034-4cd3-adee-1de1fb62373d-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"6785ef1e-5034-4cd3-adee-1de1fb62373d\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:42 crc kubenswrapper[4893]: I0128 15:29:42.296779 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x9gs\" (UniqueName: \"kubernetes.io/projected/6785ef1e-5034-4cd3-adee-1de1fb62373d-kube-api-access-5x9gs\") pod \"nova-kuttl-metadata-0\" (UID: \"6785ef1e-5034-4cd3-adee-1de1fb62373d\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:42 crc kubenswrapper[4893]: I0128 15:29:42.296845 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6785ef1e-5034-4cd3-adee-1de1fb62373d-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"6785ef1e-5034-4cd3-adee-1de1fb62373d\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:42 crc kubenswrapper[4893]: I0128 15:29:42.297425 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6785ef1e-5034-4cd3-adee-1de1fb62373d-logs\") pod 
\"nova-kuttl-metadata-0\" (UID: \"6785ef1e-5034-4cd3-adee-1de1fb62373d\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:42 crc kubenswrapper[4893]: I0128 15:29:42.308870 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6785ef1e-5034-4cd3-adee-1de1fb62373d-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"6785ef1e-5034-4cd3-adee-1de1fb62373d\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:42 crc kubenswrapper[4893]: I0128 15:29:42.320692 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x9gs\" (UniqueName: \"kubernetes.io/projected/6785ef1e-5034-4cd3-adee-1de1fb62373d-kube-api-access-5x9gs\") pod \"nova-kuttl-metadata-0\" (UID: \"6785ef1e-5034-4cd3-adee-1de1fb62373d\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:42 crc kubenswrapper[4893]: I0128 15:29:42.341374 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:42 crc kubenswrapper[4893]: I0128 15:29:42.553739 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:29:42 crc kubenswrapper[4893]: W0128 15:29:42.558951 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe84d443_b5a7_4cf3_bd18_ae0137323662.slice/crio-ff530e56c6f1e5ba35a20d7a59b5a9c1f9f4f8fa8a42104ff07963981210e784 WatchSource:0}: Error finding container ff530e56c6f1e5ba35a20d7a59b5a9c1f9f4f8fa8a42104ff07963981210e784: Status 404 returned error can't find the container with id ff530e56c6f1e5ba35a20d7a59b5a9c1f9f4f8fa8a42104ff07963981210e784 Jan 28 15:29:42 crc kubenswrapper[4893]: I0128 15:29:42.679305 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"be84d443-b5a7-4cf3-bd18-ae0137323662","Type":"ContainerStarted","Data":"ff530e56c6f1e5ba35a20d7a59b5a9c1f9f4f8fa8a42104ff07963981210e784"} Jan 28 15:29:42 crc kubenswrapper[4893]: I0128 15:29:42.690773 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:29:42 crc kubenswrapper[4893]: I0128 15:29:42.765734 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:29:42 crc kubenswrapper[4893]: W0128 15:29:42.777829 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6785ef1e_5034_4cd3_adee_1de1fb62373d.slice/crio-1e3a0943a07fa4b51ddba70e2f9ae3c87bc7168b52aea7c7f0dc02513e02060c WatchSource:0}: Error finding container 1e3a0943a07fa4b51ddba70e2f9ae3c87bc7168b52aea7c7f0dc02513e02060c: Status 404 returned error can't find the container with id 1e3a0943a07fa4b51ddba70e2f9ae3c87bc7168b52aea7c7f0dc02513e02060c Jan 28 15:29:42 crc kubenswrapper[4893]: I0128 15:29:42.913391 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74ce4d2d-762a-40b6-99fd-b56ae540b1c8" path="/var/lib/kubelet/pods/74ce4d2d-762a-40b6-99fd-b56ae540b1c8/volumes" Jan 28 15:29:42 crc kubenswrapper[4893]: I0128 15:29:42.914039 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3d06e6b-7f1a-4c58-be65-efff8dfe3d15" path="/var/lib/kubelet/pods/a3d06e6b-7f1a-4c58-be65-efff8dfe3d15/volumes" Jan 28 15:29:43 crc kubenswrapper[4893]: I0128 15:29:43.689227 4893 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"be84d443-b5a7-4cf3-bd18-ae0137323662","Type":"ContainerStarted","Data":"bbc4b1c288bf15054e99c20f7ae1734f2242e5acaaeb882d531cb4fe1609ac6f"} Jan 28 15:29:43 crc kubenswrapper[4893]: I0128 15:29:43.689599 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"be84d443-b5a7-4cf3-bd18-ae0137323662","Type":"ContainerStarted","Data":"cfe6dfa8de3ae29417dfaa66b912c204dce580ec475907cdc939013da7aa697b"} Jan 28 15:29:43 crc kubenswrapper[4893]: I0128 15:29:43.691153 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6785ef1e-5034-4cd3-adee-1de1fb62373d","Type":"ContainerStarted","Data":"a89f1261d643b698950f364b77c4ff9b98f29b93aa79b04c9e0fab5606608cbe"} Jan 28 15:29:43 crc kubenswrapper[4893]: I0128 15:29:43.691265 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6785ef1e-5034-4cd3-adee-1de1fb62373d","Type":"ContainerStarted","Data":"d0087446e830f1e22ea71dfd835f11afe23acf1b181cc25976cb0892fef7073d"} Jan 28 15:29:43 crc kubenswrapper[4893]: I0128 15:29:43.691327 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6785ef1e-5034-4cd3-adee-1de1fb62373d","Type":"ContainerStarted","Data":"1e3a0943a07fa4b51ddba70e2f9ae3c87bc7168b52aea7c7f0dc02513e02060c"} Jan 28 15:29:43 crc kubenswrapper[4893]: I0128 15:29:43.709251 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.709232225 podStartE2EDuration="2.709232225s" podCreationTimestamp="2026-01-28 15:29:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:29:43.706551792 +0000 UTC m=+1701.480166820" watchObservedRunningTime="2026-01-28 15:29:43.709232225 +0000 UTC m=+1701.482847253" Jan 28 15:29:43 crc kubenswrapper[4893]: I0128 15:29:43.735204 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=1.735184389 podStartE2EDuration="1.735184389s" podCreationTimestamp="2026-01-28 15:29:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:29:43.724938621 +0000 UTC m=+1701.498553669" watchObservedRunningTime="2026-01-28 15:29:43.735184389 +0000 UTC m=+1701.508799417" Jan 28 15:29:44 crc kubenswrapper[4893]: I0128 15:29:44.049937 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:29:44 crc kubenswrapper[4893]: I0128 15:29:44.648408 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-9m22m"] Jan 28 15:29:44 crc kubenswrapper[4893]: I0128 15:29:44.650182 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-9m22m"
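For the two "Observed pod startup duration" records above, podStartSLOduration appears to be watchObservedRunningTime minus podCreationTimestamp, with no image-pull window subtracted since both pull timestamps are the zero time (the images were already cached on the CRC node). The arithmetic for nova-kuttl-api-0 checks out:

    package main

    import (
            "fmt"
            "time"
    )

    func main() {
            // Timestamps copied from the nova-kuttl-api-0 tracker record; the
            // layout is Go's default time.String() form, and time.Parse accepts
            // the fractional seconds even though the layout omits them.
            const layout = "2006-01-02 15:04:05 -0700 MST"
            created, err := time.Parse(layout, "2026-01-28 15:29:41 +0000 UTC")
            if err != nil {
                    panic(err)
            }
            observed, err := time.Parse(layout, "2026-01-28 15:29:43.709232225 +0000 UTC")
            if err != nil {
                    panic(err)
            }
            // Prints 2.709232225s, the logged podStartSLOduration.
            fmt.Println(observed.Sub(created))
    }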
Jan 28 15:29:44 crc kubenswrapper[4893]: I0128 15:29:44.659627 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-config-data" Jan 28 15:29:44 crc kubenswrapper[4893]: I0128 15:29:44.659867 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-scripts" Jan 28 15:29:44 crc kubenswrapper[4893]: I0128 15:29:44.662353 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-9m22m"] Jan 28 15:29:44 crc kubenswrapper[4893]: I0128 15:29:44.706568 4893 generic.go:334] "Generic (PLEG): container finished" podID="80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d" containerID="7afe546e59039d9e982b04cd67336483a8a4ad4c1af2b8c09d04c94f208aa244" exitCode=0 Jan 28 15:29:44 crc kubenswrapper[4893]: I0128 15:29:44.707290 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d","Type":"ContainerDied","Data":"7afe546e59039d9e982b04cd67336483a8a4ad4c1af2b8c09d04c94f208aa244"} Jan 28 15:29:44 crc kubenswrapper[4893]: I0128 15:29:44.707313 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d","Type":"ContainerDied","Data":"973c0e658f4c4c483716efb32eeb8160557605d65ac22572db86fb4cff25d19e"} Jan 28 15:29:44 crc kubenswrapper[4893]: I0128 15:29:44.707325 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="973c0e658f4c4c483716efb32eeb8160557605d65ac22572db86fb4cff25d19e" Jan 28 15:29:44 crc kubenswrapper[4893]: I0128 15:29:44.729018 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:29:44 crc kubenswrapper[4893]: I0128 15:29:44.734912 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7bcd70-52c5-4df8-8c09-28881a2fa384-config-data\") pod \"nova-kuttl-cell1-cell-mapping-9m22m\" (UID: \"5c7bcd70-52c5-4df8-8c09-28881a2fa384\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-9m22m" Jan 28 15:29:44 crc kubenswrapper[4893]: I0128 15:29:44.735020 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c7bcd70-52c5-4df8-8c09-28881a2fa384-scripts\") pod \"nova-kuttl-cell1-cell-mapping-9m22m\" (UID: \"5c7bcd70-52c5-4df8-8c09-28881a2fa384\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-9m22m" Jan 28 15:29:44 crc kubenswrapper[4893]: I0128 15:29:44.735070 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxwqr\" (UniqueName: \"kubernetes.io/projected/5c7bcd70-52c5-4df8-8c09-28881a2fa384-kube-api-access-hxwqr\") pod \"nova-kuttl-cell1-cell-mapping-9m22m\" (UID: \"5c7bcd70-52c5-4df8-8c09-28881a2fa384\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-9m22m" Jan 28 15:29:44 crc kubenswrapper[4893]: I0128 15:29:44.835711 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87tpz\" (UniqueName: \"kubernetes.io/projected/80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d-kube-api-access-87tpz\") pod \"80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d\" (UID: \"80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d\") " Jan 28 15:29:44 crc kubenswrapper[4893]: I0128 15:29:44.835924 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d-config-data\") pod \"80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d\" (UID: \"80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d\") " Jan 28 15:29:44 crc kubenswrapper[4893]: I0128 15:29:44.836383 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7bcd70-52c5-4df8-8c09-28881a2fa384-config-data\") pod \"nova-kuttl-cell1-cell-mapping-9m22m\" (UID: \"5c7bcd70-52c5-4df8-8c09-28881a2fa384\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-9m22m" Jan 28 15:29:44 crc kubenswrapper[4893]: I0128 15:29:44.837143 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c7bcd70-52c5-4df8-8c09-28881a2fa384-scripts\") pod \"nova-kuttl-cell1-cell-mapping-9m22m\" (UID: \"5c7bcd70-52c5-4df8-8c09-28881a2fa384\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-9m22m" Jan 28 15:29:44 crc kubenswrapper[4893]: I0128 15:29:44.837211 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxwqr\" (UniqueName: \"kubernetes.io/projected/5c7bcd70-52c5-4df8-8c09-28881a2fa384-kube-api-access-hxwqr\") pod \"nova-kuttl-cell1-cell-mapping-9m22m\" (UID: \"5c7bcd70-52c5-4df8-8c09-28881a2fa384\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-9m22m" Jan 28 15:29:44 crc kubenswrapper[4893]: I0128 15:29:44.842876 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d-kube-api-access-87tpz" (OuterVolumeSpecName: 
"kube-api-access-87tpz") pod "80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d" (UID: "80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d"). InnerVolumeSpecName "kube-api-access-87tpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:29:44 crc kubenswrapper[4893]: I0128 15:29:44.846639 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7bcd70-52c5-4df8-8c09-28881a2fa384-config-data\") pod \"nova-kuttl-cell1-cell-mapping-9m22m\" (UID: \"5c7bcd70-52c5-4df8-8c09-28881a2fa384\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-9m22m" Jan 28 15:29:44 crc kubenswrapper[4893]: I0128 15:29:44.847109 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c7bcd70-52c5-4df8-8c09-28881a2fa384-scripts\") pod \"nova-kuttl-cell1-cell-mapping-9m22m\" (UID: \"5c7bcd70-52c5-4df8-8c09-28881a2fa384\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-9m22m" Jan 28 15:29:44 crc kubenswrapper[4893]: I0128 15:29:44.874367 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxwqr\" (UniqueName: \"kubernetes.io/projected/5c7bcd70-52c5-4df8-8c09-28881a2fa384-kube-api-access-hxwqr\") pod \"nova-kuttl-cell1-cell-mapping-9m22m\" (UID: \"5c7bcd70-52c5-4df8-8c09-28881a2fa384\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-9m22m" Jan 28 15:29:44 crc kubenswrapper[4893]: I0128 15:29:44.881426 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d-config-data" (OuterVolumeSpecName: "config-data") pod "80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d" (UID: "80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:29:44 crc kubenswrapper[4893]: I0128 15:29:44.952942 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:44 crc kubenswrapper[4893]: I0128 15:29:44.952988 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87tpz\" (UniqueName: \"kubernetes.io/projected/80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d-kube-api-access-87tpz\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:45 crc kubenswrapper[4893]: I0128 15:29:45.043249 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-9m22m" Jan 28 15:29:45 crc kubenswrapper[4893]: I0128 15:29:45.488702 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-9m22m"] Jan 28 15:29:45 crc kubenswrapper[4893]: W0128 15:29:45.499429 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c7bcd70_52c5_4df8_8c09_28881a2fa384.slice/crio-f8ce97001741b5e9e5d1ee4baa58973742f3047763c434bb20d20fa78487afe1 WatchSource:0}: Error finding container f8ce97001741b5e9e5d1ee4baa58973742f3047763c434bb20d20fa78487afe1: Status 404 returned error can't find the container with id f8ce97001741b5e9e5d1ee4baa58973742f3047763c434bb20d20fa78487afe1 Jan 28 15:29:45 crc kubenswrapper[4893]: I0128 15:29:45.717013 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-9m22m" event={"ID":"5c7bcd70-52c5-4df8-8c09-28881a2fa384","Type":"ContainerStarted","Data":"f8ce97001741b5e9e5d1ee4baa58973742f3047763c434bb20d20fa78487afe1"} Jan 28 15:29:45 crc kubenswrapper[4893]: I0128 15:29:45.717060 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:29:45 crc kubenswrapper[4893]: I0128 15:29:45.737728 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:29:45 crc kubenswrapper[4893]: I0128 15:29:45.746183 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:29:45 crc kubenswrapper[4893]: I0128 15:29:45.759405 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:29:45 crc kubenswrapper[4893]: E0128 15:29:45.759934 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:29:45 crc kubenswrapper[4893]: I0128 15:29:45.760140 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:29:45 crc kubenswrapper[4893]: I0128 15:29:45.760340 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:29:45 crc kubenswrapper[4893]: I0128 15:29:45.761161 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:29:45 crc kubenswrapper[4893]: I0128 15:29:45.763017 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 28 15:29:45 crc kubenswrapper[4893]: I0128 15:29:45.768451 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:29:45 crc kubenswrapper[4893]: I0128 15:29:45.869317 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fjpn\" (UniqueName: \"kubernetes.io/projected/018adc90-a685-4a65-b07b-521f37578e5e-kube-api-access-7fjpn\") pod \"nova-kuttl-scheduler-0\" (UID: \"018adc90-a685-4a65-b07b-521f37578e5e\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:29:45 crc kubenswrapper[4893]: I0128 15:29:45.869544 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018adc90-a685-4a65-b07b-521f37578e5e-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"018adc90-a685-4a65-b07b-521f37578e5e\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:29:45 crc kubenswrapper[4893]: I0128 15:29:45.891992 4893 scope.go:117] "RemoveContainer" containerID="79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51" Jan 28 15:29:45 crc kubenswrapper[4893]: E0128 15:29:45.892230 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:29:45 crc kubenswrapper[4893]: I0128 15:29:45.971191 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018adc90-a685-4a65-b07b-521f37578e5e-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"018adc90-a685-4a65-b07b-521f37578e5e\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:29:45 crc kubenswrapper[4893]: I0128 15:29:45.971255 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fjpn\" (UniqueName: \"kubernetes.io/projected/018adc90-a685-4a65-b07b-521f37578e5e-kube-api-access-7fjpn\") pod \"nova-kuttl-scheduler-0\" (UID: \"018adc90-a685-4a65-b07b-521f37578e5e\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:29:45 crc kubenswrapper[4893]: I0128 15:29:45.980265 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018adc90-a685-4a65-b07b-521f37578e5e-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"018adc90-a685-4a65-b07b-521f37578e5e\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:29:45 crc kubenswrapper[4893]: I0128 15:29:45.989228 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fjpn\" (UniqueName: \"kubernetes.io/projected/018adc90-a685-4a65-b07b-521f37578e5e-kube-api-access-7fjpn\") pod \"nova-kuttl-scheduler-0\" (UID: \"018adc90-a685-4a65-b07b-521f37578e5e\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:29:46 crc kubenswrapper[4893]: I0128 15:29:46.111514 4893 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:29:46 crc kubenswrapper[4893]: W0128 15:29:46.567587 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod018adc90_a685_4a65_b07b_521f37578e5e.slice/crio-a711271f547a4e3225b2134cdcab976304917f54fce6c22e34b610b4bde8759e WatchSource:0}: Error finding container a711271f547a4e3225b2134cdcab976304917f54fce6c22e34b610b4bde8759e: Status 404 returned error can't find the container with id a711271f547a4e3225b2134cdcab976304917f54fce6c22e34b610b4bde8759e Jan 28 15:29:46 crc kubenswrapper[4893]: I0128 15:29:46.568006 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:29:46 crc kubenswrapper[4893]: I0128 15:29:46.727959 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-9m22m" event={"ID":"5c7bcd70-52c5-4df8-8c09-28881a2fa384","Type":"ContainerStarted","Data":"a8a6c463fe31f9d95e10ee96f624accda2ac3466897c2010c5ed48f0ea494aa4"} Jan 28 15:29:46 crc kubenswrapper[4893]: I0128 15:29:46.729530 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"018adc90-a685-4a65-b07b-521f37578e5e","Type":"ContainerStarted","Data":"a711271f547a4e3225b2134cdcab976304917f54fce6c22e34b610b4bde8759e"} Jan 28 15:29:46 crc kubenswrapper[4893]: I0128 15:29:46.748234 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-9m22m" podStartSLOduration=2.7482157149999997 podStartE2EDuration="2.748215715s" podCreationTimestamp="2026-01-28 15:29:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:29:46.743834207 +0000 UTC m=+1704.517449235" watchObservedRunningTime="2026-01-28 15:29:46.748215715 +0000 UTC m=+1704.521830743" Jan 28 15:29:46 crc kubenswrapper[4893]: I0128 15:29:46.903008 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d" path="/var/lib/kubelet/pods/80a4ed1a-6d11-4cdd-b2bb-23cd7b1e492d/volumes" Jan 28 15:29:47 crc kubenswrapper[4893]: I0128 15:29:47.342232 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:47 crc kubenswrapper[4893]: I0128 15:29:47.342597 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:47 crc kubenswrapper[4893]: I0128 15:29:47.740229 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"018adc90-a685-4a65-b07b-521f37578e5e","Type":"ContainerStarted","Data":"17f1856a1cab1c8c7c0ea08d1e1d0f378fa24ea2f8ebfdd48be6ce9bc2771e3f"} Jan 28 15:29:47 crc kubenswrapper[4893]: I0128 15:29:47.771051 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.771025938 podStartE2EDuration="2.771025938s" podCreationTimestamp="2026-01-28 15:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:29:47.760919195 +0000 UTC m=+1705.534534233" watchObservedRunningTime="2026-01-28 15:29:47.771025938 +0000 UTC m=+1705.544640956" Jan 28 15:29:51 crc 
kubenswrapper[4893]: I0128 15:29:51.112463 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:29:51 crc kubenswrapper[4893]: I0128 15:29:51.778581 4893 generic.go:334] "Generic (PLEG): container finished" podID="5c7bcd70-52c5-4df8-8c09-28881a2fa384" containerID="a8a6c463fe31f9d95e10ee96f624accda2ac3466897c2010c5ed48f0ea494aa4" exitCode=0 Jan 28 15:29:51 crc kubenswrapper[4893]: I0128 15:29:51.778664 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-9m22m" event={"ID":"5c7bcd70-52c5-4df8-8c09-28881a2fa384","Type":"ContainerDied","Data":"a8a6c463fe31f9d95e10ee96f624accda2ac3466897c2010c5ed48f0ea494aa4"} Jan 28 15:29:52 crc kubenswrapper[4893]: I0128 15:29:52.072030 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:52 crc kubenswrapper[4893]: I0128 15:29:52.072113 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:52 crc kubenswrapper[4893]: I0128 15:29:52.342459 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:52 crc kubenswrapper[4893]: I0128 15:29:52.342523 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:53 crc kubenswrapper[4893]: I0128 15:29:53.166786 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="be84d443-b5a7-4cf3-bd18-ae0137323662" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.159:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:29:53 crc kubenswrapper[4893]: I0128 15:29:53.166842 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="be84d443-b5a7-4cf3-bd18-ae0137323662" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.159:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:29:53 crc kubenswrapper[4893]: I0128 15:29:53.226358 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-9m22m"
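The "Probe failed" records above are kubelet startup probes against http://10.217.0.159:8774/ timing out before nova-api starts listening; "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" is net/http's wording when the request deadline expires before response headers arrive. A self-contained sketch of one such HTTP probe attempt; the 1s timeout mirrors the kubelet's default timeoutSeconds and is an assumption for this pod:

    package main

    import (
            "context"
            "errors"
            "fmt"
            "net/http"
            "time"
    )

    // probeOnce issues one GET the way a startup/readiness probe does:
    // a status in [200,400) is success; timeouts and refused connections
    // are failures the caller retries on the probe period.
    func probeOnce(url string, timeout time.Duration) error {
            ctx, cancel := context.WithTimeout(context.Background(), timeout)
            defer cancel()
            req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
            if err != nil {
                    return err
            }
            resp, err := http.DefaultClient.Do(req)
            if err != nil {
                    return err // e.g. context.DeadlineExceeded while awaiting headers
            }
            defer resp.Body.Close()
            if resp.StatusCode < 200 || resp.StatusCode >= 400 {
                    return fmt.Errorf("unexpected status %d", resp.StatusCode)
            }
            return nil
    }

    func main() {
            // Endpoint taken from the probe failure records above.
            err := probeOnce("http://10.217.0.159:8774/", 1*time.Second)
            switch {
            case errors.Is(err, context.DeadlineExceeded):
                    fmt.Println("probe failed: context deadline exceeded")
            case err != nil:
                    fmt.Println("probe failed:", err)
            default:
                    fmt.Println("probe ok")
            }
    }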
Jan 28 15:29:53 crc kubenswrapper[4893]: I0128 15:29:53.251170 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7bcd70-52c5-4df8-8c09-28881a2fa384-config-data\") pod \"5c7bcd70-52c5-4df8-8c09-28881a2fa384\" (UID: \"5c7bcd70-52c5-4df8-8c09-28881a2fa384\") " Jan 28 15:29:53 crc kubenswrapper[4893]: I0128 15:29:53.251259 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxwqr\" (UniqueName: \"kubernetes.io/projected/5c7bcd70-52c5-4df8-8c09-28881a2fa384-kube-api-access-hxwqr\") pod \"5c7bcd70-52c5-4df8-8c09-28881a2fa384\" (UID: \"5c7bcd70-52c5-4df8-8c09-28881a2fa384\") " Jan 28 15:29:53 crc kubenswrapper[4893]: I0128 15:29:53.251343 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c7bcd70-52c5-4df8-8c09-28881a2fa384-scripts\") pod \"5c7bcd70-52c5-4df8-8c09-28881a2fa384\" (UID: \"5c7bcd70-52c5-4df8-8c09-28881a2fa384\") " Jan 28 15:29:53 crc kubenswrapper[4893]: I0128 15:29:53.283966 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c7bcd70-52c5-4df8-8c09-28881a2fa384-kube-api-access-hxwqr" (OuterVolumeSpecName: "kube-api-access-hxwqr") pod "5c7bcd70-52c5-4df8-8c09-28881a2fa384" (UID: "5c7bcd70-52c5-4df8-8c09-28881a2fa384"). InnerVolumeSpecName "kube-api-access-hxwqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:29:53 crc kubenswrapper[4893]: I0128 15:29:53.307866 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c7bcd70-52c5-4df8-8c09-28881a2fa384-scripts" (OuterVolumeSpecName: "scripts") pod "5c7bcd70-52c5-4df8-8c09-28881a2fa384" (UID: "5c7bcd70-52c5-4df8-8c09-28881a2fa384"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:29:53 crc kubenswrapper[4893]: I0128 15:29:53.338317 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c7bcd70-52c5-4df8-8c09-28881a2fa384-config-data" (OuterVolumeSpecName: "config-data") pod "5c7bcd70-52c5-4df8-8c09-28881a2fa384" (UID: "5c7bcd70-52c5-4df8-8c09-28881a2fa384"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:29:53 crc kubenswrapper[4893]: I0128 15:29:53.353872 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7bcd70-52c5-4df8-8c09-28881a2fa384-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:53 crc kubenswrapper[4893]: I0128 15:29:53.353920 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxwqr\" (UniqueName: \"kubernetes.io/projected/5c7bcd70-52c5-4df8-8c09-28881a2fa384-kube-api-access-hxwqr\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:53 crc kubenswrapper[4893]: I0128 15:29:53.353934 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c7bcd70-52c5-4df8-8c09-28881a2fa384-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:53 crc kubenswrapper[4893]: I0128 15:29:53.425835 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="6785ef1e-5034-4cd3-adee-1de1fb62373d" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.160:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:29:53 crc kubenswrapper[4893]: I0128 15:29:53.426277 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="6785ef1e-5034-4cd3-adee-1de1fb62373d" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.160:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:29:53 crc kubenswrapper[4893]: I0128 15:29:53.797033 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-9m22m" event={"ID":"5c7bcd70-52c5-4df8-8c09-28881a2fa384","Type":"ContainerDied","Data":"f8ce97001741b5e9e5d1ee4baa58973742f3047763c434bb20d20fa78487afe1"} Jan 28 15:29:53 crc kubenswrapper[4893]: I0128 15:29:53.797091 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8ce97001741b5e9e5d1ee4baa58973742f3047763c434bb20d20fa78487afe1" Jan 28 15:29:53 crc kubenswrapper[4893]: I0128 15:29:53.797187 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-9m22m" Jan 28 15:29:53 crc kubenswrapper[4893]: I0128 15:29:53.995781 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:29:53 crc kubenswrapper[4893]: I0128 15:29:53.996756 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="be84d443-b5a7-4cf3-bd18-ae0137323662" containerName="nova-kuttl-api-log" containerID="cri-o://cfe6dfa8de3ae29417dfaa66b912c204dce580ec475907cdc939013da7aa697b" gracePeriod=30 Jan 28 15:29:53 crc kubenswrapper[4893]: I0128 15:29:53.997531 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="be84d443-b5a7-4cf3-bd18-ae0137323662" containerName="nova-kuttl-api-api" containerID="cri-o://bbc4b1c288bf15054e99c20f7ae1734f2242e5acaaeb882d531cb4fe1609ac6f" gracePeriod=30 Jan 28 15:29:54 crc kubenswrapper[4893]: I0128 15:29:54.019667 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:29:54 crc kubenswrapper[4893]: I0128 15:29:54.030407 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="018adc90-a685-4a65-b07b-521f37578e5e" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://17f1856a1cab1c8c7c0ea08d1e1d0f378fa24ea2f8ebfdd48be6ce9bc2771e3f" gracePeriod=30 Jan 28 15:29:54 crc kubenswrapper[4893]: I0128 15:29:54.079123 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:29:54 crc kubenswrapper[4893]: I0128 15:29:54.079447 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="6785ef1e-5034-4cd3-adee-1de1fb62373d" containerName="nova-kuttl-metadata-log" containerID="cri-o://d0087446e830f1e22ea71dfd835f11afe23acf1b181cc25976cb0892fef7073d" gracePeriod=30 Jan 28 15:29:54 crc kubenswrapper[4893]: I0128 15:29:54.079637 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="6785ef1e-5034-4cd3-adee-1de1fb62373d" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://a89f1261d643b698950f364b77c4ff9b98f29b93aa79b04c9e0fab5606608cbe" gracePeriod=30 Jan 28 15:29:54 crc kubenswrapper[4893]: I0128 15:29:54.809413 4893 generic.go:334] "Generic (PLEG): container finished" podID="6785ef1e-5034-4cd3-adee-1de1fb62373d" containerID="d0087446e830f1e22ea71dfd835f11afe23acf1b181cc25976cb0892fef7073d" exitCode=143 Jan 28 15:29:54 crc kubenswrapper[4893]: I0128 15:29:54.809548 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6785ef1e-5034-4cd3-adee-1de1fb62373d","Type":"ContainerDied","Data":"d0087446e830f1e22ea71dfd835f11afe23acf1b181cc25976cb0892fef7073d"} Jan 28 15:29:54 crc kubenswrapper[4893]: I0128 15:29:54.814929 4893 generic.go:334] "Generic (PLEG): container finished" podID="be84d443-b5a7-4cf3-bd18-ae0137323662" containerID="cfe6dfa8de3ae29417dfaa66b912c204dce580ec475907cdc939013da7aa697b" exitCode=143 Jan 28 15:29:54 crc kubenswrapper[4893]: I0128 15:29:54.814976 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"be84d443-b5a7-4cf3-bd18-ae0137323662","Type":"ContainerDied","Data":"cfe6dfa8de3ae29417dfaa66b912c204dce580ec475907cdc939013da7aa697b"}
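"Killing container with a grace period" with gracePeriod=30 means the runtime delivers SIGTERM and escalates to SIGKILL if the container is still alive 30 seconds later. The exitCode=143 pairs just above are 128 + 15 (SIGTERM): the -log sidecars die on the signal itself, while the api, scheduler and metadata main containers shut down in time to exit 0 a few seconds further on. A process that wants the clean exit handles the signal, e.g. this minimal sketch:

    package main

    import (
            "fmt"
            "os"
            "os/signal"
            "syscall"
    )

    func main() {
            // Without a handler, SIGTERM kills the process with status 143
            // (128 + signal 15), the exitCode seen for the -log containers.
            term := make(chan os.Signal, 1)
            signal.Notify(term, syscall.SIGTERM)

            <-term // block until the runtime delivers SIGTERM
            fmt.Println("draining and shutting down within the grace period")
            os.Exit(0) // clean exit, like the main containers above
    }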
event={"ID":"be84d443-b5a7-4cf3-bd18-ae0137323662","Type":"ContainerDied","Data":"cfe6dfa8de3ae29417dfaa66b912c204dce580ec475907cdc939013da7aa697b"} Jan 28 15:29:57 crc kubenswrapper[4893]: I0128 15:29:57.892622 4893 scope.go:117] "RemoveContainer" containerID="79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51" Jan 28 15:29:57 crc kubenswrapper[4893]: E0128 15:29:57.893173 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:29:58 crc kubenswrapper[4893]: I0128 15:29:58.853125 4893 generic.go:334] "Generic (PLEG): container finished" podID="be84d443-b5a7-4cf3-bd18-ae0137323662" containerID="bbc4b1c288bf15054e99c20f7ae1734f2242e5acaaeb882d531cb4fe1609ac6f" exitCode=0 Jan 28 15:29:58 crc kubenswrapper[4893]: I0128 15:29:58.853659 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"be84d443-b5a7-4cf3-bd18-ae0137323662","Type":"ContainerDied","Data":"bbc4b1c288bf15054e99c20f7ae1734f2242e5acaaeb882d531cb4fe1609ac6f"} Jan 28 15:29:58 crc kubenswrapper[4893]: I0128 15:29:58.855604 4893 generic.go:334] "Generic (PLEG): container finished" podID="018adc90-a685-4a65-b07b-521f37578e5e" containerID="17f1856a1cab1c8c7c0ea08d1e1d0f378fa24ea2f8ebfdd48be6ce9bc2771e3f" exitCode=0 Jan 28 15:29:58 crc kubenswrapper[4893]: I0128 15:29:58.855690 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"018adc90-a685-4a65-b07b-521f37578e5e","Type":"ContainerDied","Data":"17f1856a1cab1c8c7c0ea08d1e1d0f378fa24ea2f8ebfdd48be6ce9bc2771e3f"} Jan 28 15:29:58 crc kubenswrapper[4893]: I0128 15:29:58.855728 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"018adc90-a685-4a65-b07b-521f37578e5e","Type":"ContainerDied","Data":"a711271f547a4e3225b2134cdcab976304917f54fce6c22e34b610b4bde8759e"} Jan 28 15:29:58 crc kubenswrapper[4893]: I0128 15:29:58.855742 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a711271f547a4e3225b2134cdcab976304917f54fce6c22e34b610b4bde8759e" Jan 28 15:29:58 crc kubenswrapper[4893]: I0128 15:29:58.857936 4893 generic.go:334] "Generic (PLEG): container finished" podID="6785ef1e-5034-4cd3-adee-1de1fb62373d" containerID="a89f1261d643b698950f364b77c4ff9b98f29b93aa79b04c9e0fab5606608cbe" exitCode=0 Jan 28 15:29:58 crc kubenswrapper[4893]: I0128 15:29:58.858000 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6785ef1e-5034-4cd3-adee-1de1fb62373d","Type":"ContainerDied","Data":"a89f1261d643b698950f364b77c4ff9b98f29b93aa79b04c9e0fab5606608cbe"} Jan 28 15:29:58 crc kubenswrapper[4893]: I0128 15:29:58.902025 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:29:58 crc kubenswrapper[4893]: I0128 15:29:58.961616 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018adc90-a685-4a65-b07b-521f37578e5e-config-data\") pod \"018adc90-a685-4a65-b07b-521f37578e5e\" (UID: \"018adc90-a685-4a65-b07b-521f37578e5e\") " Jan 28 15:29:58 crc kubenswrapper[4893]: I0128 15:29:58.961804 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fjpn\" (UniqueName: \"kubernetes.io/projected/018adc90-a685-4a65-b07b-521f37578e5e-kube-api-access-7fjpn\") pod \"018adc90-a685-4a65-b07b-521f37578e5e\" (UID: \"018adc90-a685-4a65-b07b-521f37578e5e\") " Jan 28 15:29:58 crc kubenswrapper[4893]: I0128 15:29:58.970081 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/018adc90-a685-4a65-b07b-521f37578e5e-kube-api-access-7fjpn" (OuterVolumeSpecName: "kube-api-access-7fjpn") pod "018adc90-a685-4a65-b07b-521f37578e5e" (UID: "018adc90-a685-4a65-b07b-521f37578e5e"). InnerVolumeSpecName "kube-api-access-7fjpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:29:58 crc kubenswrapper[4893]: I0128 15:29:58.998244 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.008333 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/018adc90-a685-4a65-b07b-521f37578e5e-config-data" (OuterVolumeSpecName: "config-data") pod "018adc90-a685-4a65-b07b-521f37578e5e" (UID: "018adc90-a685-4a65-b07b-521f37578e5e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.021225 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.063614 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6785ef1e-5034-4cd3-adee-1de1fb62373d-logs\") pod \"6785ef1e-5034-4cd3-adee-1de1fb62373d\" (UID: \"6785ef1e-5034-4cd3-adee-1de1fb62373d\") " Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.063727 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6785ef1e-5034-4cd3-adee-1de1fb62373d-config-data\") pod \"6785ef1e-5034-4cd3-adee-1de1fb62373d\" (UID: \"6785ef1e-5034-4cd3-adee-1de1fb62373d\") " Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.064433 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5x9gs\" (UniqueName: \"kubernetes.io/projected/6785ef1e-5034-4cd3-adee-1de1fb62373d-kube-api-access-5x9gs\") pod \"6785ef1e-5034-4cd3-adee-1de1fb62373d\" (UID: \"6785ef1e-5034-4cd3-adee-1de1fb62373d\") " Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.064502 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be84d443-b5a7-4cf3-bd18-ae0137323662-logs\") pod \"be84d443-b5a7-4cf3-bd18-ae0137323662\" (UID: \"be84d443-b5a7-4cf3-bd18-ae0137323662\") " Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.064578 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stmtc\" (UniqueName: \"kubernetes.io/projected/be84d443-b5a7-4cf3-bd18-ae0137323662-kube-api-access-stmtc\") pod \"be84d443-b5a7-4cf3-bd18-ae0137323662\" (UID: \"be84d443-b5a7-4cf3-bd18-ae0137323662\") " Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.064609 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be84d443-b5a7-4cf3-bd18-ae0137323662-config-data\") pod \"be84d443-b5a7-4cf3-bd18-ae0137323662\" (UID: \"be84d443-b5a7-4cf3-bd18-ae0137323662\") " Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.065012 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be84d443-b5a7-4cf3-bd18-ae0137323662-logs" (OuterVolumeSpecName: "logs") pod "be84d443-b5a7-4cf3-bd18-ae0137323662" (UID: "be84d443-b5a7-4cf3-bd18-ae0137323662"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.065362 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6785ef1e-5034-4cd3-adee-1de1fb62373d-logs" (OuterVolumeSpecName: "logs") pod "6785ef1e-5034-4cd3-adee-1de1fb62373d" (UID: "6785ef1e-5034-4cd3-adee-1de1fb62373d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.069002 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be84d443-b5a7-4cf3-bd18-ae0137323662-kube-api-access-stmtc" (OuterVolumeSpecName: "kube-api-access-stmtc") pod "be84d443-b5a7-4cf3-bd18-ae0137323662" (UID: "be84d443-b5a7-4cf3-bd18-ae0137323662"). InnerVolumeSpecName "kube-api-access-stmtc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.073153 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fjpn\" (UniqueName: \"kubernetes.io/projected/018adc90-a685-4a65-b07b-521f37578e5e-kube-api-access-7fjpn\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.073227 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be84d443-b5a7-4cf3-bd18-ae0137323662-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.073241 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stmtc\" (UniqueName: \"kubernetes.io/projected/be84d443-b5a7-4cf3-bd18-ae0137323662-kube-api-access-stmtc\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.073251 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018adc90-a685-4a65-b07b-521f37578e5e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.073280 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6785ef1e-5034-4cd3-adee-1de1fb62373d-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.075368 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6785ef1e-5034-4cd3-adee-1de1fb62373d-kube-api-access-5x9gs" (OuterVolumeSpecName: "kube-api-access-5x9gs") pod "6785ef1e-5034-4cd3-adee-1de1fb62373d" (UID: "6785ef1e-5034-4cd3-adee-1de1fb62373d"). InnerVolumeSpecName "kube-api-access-5x9gs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.088389 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6785ef1e-5034-4cd3-adee-1de1fb62373d-config-data" (OuterVolumeSpecName: "config-data") pod "6785ef1e-5034-4cd3-adee-1de1fb62373d" (UID: "6785ef1e-5034-4cd3-adee-1de1fb62373d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.092731 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be84d443-b5a7-4cf3-bd18-ae0137323662-config-data" (OuterVolumeSpecName: "config-data") pod "be84d443-b5a7-4cf3-bd18-ae0137323662" (UID: "be84d443-b5a7-4cf3-bd18-ae0137323662"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.175054 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5x9gs\" (UniqueName: \"kubernetes.io/projected/6785ef1e-5034-4cd3-adee-1de1fb62373d-kube-api-access-5x9gs\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.175108 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be84d443-b5a7-4cf3-bd18-ae0137323662-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.175123 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6785ef1e-5034-4cd3-adee-1de1fb62373d-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.869268 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"be84d443-b5a7-4cf3-bd18-ae0137323662","Type":"ContainerDied","Data":"ff530e56c6f1e5ba35a20d7a59b5a9c1f9f4f8fa8a42104ff07963981210e784"} Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.869288 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.869333 4893 scope.go:117] "RemoveContainer" containerID="bbc4b1c288bf15054e99c20f7ae1734f2242e5acaaeb882d531cb4fe1609ac6f" Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.872687 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.872710 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.872783 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"6785ef1e-5034-4cd3-adee-1de1fb62373d","Type":"ContainerDied","Data":"1e3a0943a07fa4b51ddba70e2f9ae3c87bc7168b52aea7c7f0dc02513e02060c"} Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.895179 4893 scope.go:117] "RemoveContainer" containerID="cfe6dfa8de3ae29417dfaa66b912c204dce580ec475907cdc939013da7aa697b" Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.919888 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:29:59 crc kubenswrapper[4893]: I0128 15:29:59.936557 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.007027 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:30:00 crc kubenswrapper[4893]: E0128 15:30:00.007422 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6785ef1e-5034-4cd3-adee-1de1fb62373d" containerName="nova-kuttl-metadata-metadata" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.007435 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="6785ef1e-5034-4cd3-adee-1de1fb62373d" containerName="nova-kuttl-metadata-metadata" Jan 28 15:30:00 crc kubenswrapper[4893]: E0128 15:30:00.007462 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="018adc90-a685-4a65-b07b-521f37578e5e" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.007472 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="018adc90-a685-4a65-b07b-521f37578e5e" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:30:00 crc kubenswrapper[4893]: E0128 15:30:00.007514 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be84d443-b5a7-4cf3-bd18-ae0137323662" containerName="nova-kuttl-api-log" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.007523 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="be84d443-b5a7-4cf3-bd18-ae0137323662" containerName="nova-kuttl-api-log" Jan 28 15:30:00 crc kubenswrapper[4893]: E0128 15:30:00.007535 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c7bcd70-52c5-4df8-8c09-28881a2fa384" containerName="nova-manage" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.007542 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c7bcd70-52c5-4df8-8c09-28881a2fa384" containerName="nova-manage" Jan 28 15:30:00 crc kubenswrapper[4893]: E0128 15:30:00.007556 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be84d443-b5a7-4cf3-bd18-ae0137323662" containerName="nova-kuttl-api-api" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.007563 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="be84d443-b5a7-4cf3-bd18-ae0137323662" containerName="nova-kuttl-api-api" Jan 28 15:30:00 crc kubenswrapper[4893]: E0128 15:30:00.007573 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6785ef1e-5034-4cd3-adee-1de1fb62373d" containerName="nova-kuttl-metadata-log" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.007582 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="6785ef1e-5034-4cd3-adee-1de1fb62373d" containerName="nova-kuttl-metadata-log" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 
15:30:00.007748 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="6785ef1e-5034-4cd3-adee-1de1fb62373d" containerName="nova-kuttl-metadata-metadata" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.007758 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="6785ef1e-5034-4cd3-adee-1de1fb62373d" containerName="nova-kuttl-metadata-log" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.007771 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="be84d443-b5a7-4cf3-bd18-ae0137323662" containerName="nova-kuttl-api-api" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.007780 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c7bcd70-52c5-4df8-8c09-28881a2fa384" containerName="nova-manage" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.007787 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="018adc90-a685-4a65-b07b-521f37578e5e" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.007797 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="be84d443-b5a7-4cf3-bd18-ae0137323662" containerName="nova-kuttl-api-log" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.008684 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.010138 4893 scope.go:117] "RemoveContainer" containerID="a89f1261d643b698950f364b77c4ff9b98f29b93aa79b04c9e0fab5606608cbe" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.011215 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.038605 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.051386 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.075630 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.079766 4893 scope.go:117] "RemoveContainer" containerID="d0087446e830f1e22ea71dfd835f11afe23acf1b181cc25976cb0892fef7073d" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.088962 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.094755 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0987bc27-2528-4462-bcd5-0941ec12bef4-config-data\") pod \"nova-kuttl-api-0\" (UID: \"0987bc27-2528-4462-bcd5-0941ec12bef4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.094900 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqvcd\" (UniqueName: \"kubernetes.io/projected/0987bc27-2528-4462-bcd5-0941ec12bef4-kube-api-access-kqvcd\") pod \"nova-kuttl-api-0\" (UID: \"0987bc27-2528-4462-bcd5-0941ec12bef4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.094952 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/0987bc27-2528-4462-bcd5-0941ec12bef4-logs\") pod \"nova-kuttl-api-0\" (UID: \"0987bc27-2528-4462-bcd5-0941ec12bef4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.107124 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.108539 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.111983 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.115411 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.153176 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.196797 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0987bc27-2528-4462-bcd5-0941ec12bef4-config-data\") pod \"nova-kuttl-api-0\" (UID: \"0987bc27-2528-4462-bcd5-0941ec12bef4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.196893 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvnpq\" (UniqueName: \"kubernetes.io/projected/32c33a83-8802-40c5-94ac-8943e8e5df5f-kube-api-access-vvnpq\") pod \"nova-kuttl-scheduler-0\" (UID: \"32c33a83-8802-40c5-94ac-8943e8e5df5f\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.203053 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0987bc27-2528-4462-bcd5-0941ec12bef4-config-data\") pod \"nova-kuttl-api-0\" (UID: \"0987bc27-2528-4462-bcd5-0941ec12bef4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.204096 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.210805 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c33a83-8802-40c5-94ac-8943e8e5df5f-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"32c33a83-8802-40c5-94ac-8943e8e5df5f\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.210879 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqvcd\" (UniqueName: \"kubernetes.io/projected/0987bc27-2528-4462-bcd5-0941ec12bef4-kube-api-access-kqvcd\") pod \"nova-kuttl-api-0\" (UID: \"0987bc27-2528-4462-bcd5-0941ec12bef4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.210956 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0987bc27-2528-4462-bcd5-0941ec12bef4-logs\") pod \"nova-kuttl-api-0\" (UID: \"0987bc27-2528-4462-bcd5-0941ec12bef4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:30:00 crc kubenswrapper[4893]: 
I0128 15:30:00.213597 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.214719 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.214732 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0987bc27-2528-4462-bcd5-0941ec12bef4-logs\") pod \"nova-kuttl-api-0\" (UID: \"0987bc27-2528-4462-bcd5-0941ec12bef4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.219656 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.234523 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqvcd\" (UniqueName: \"kubernetes.io/projected/0987bc27-2528-4462-bcd5-0941ec12bef4-kube-api-access-kqvcd\") pod \"nova-kuttl-api-0\" (UID: \"0987bc27-2528-4462-bcd5-0941ec12bef4\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.237644 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493570-qkwd4"] Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.238965 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-qkwd4" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.241796 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.241977 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.249687 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493570-qkwd4"] Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.312806 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9acab649-6a00-44d1-ab58-501d4059248c-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"9acab649-6a00-44d1-ab58-501d4059248c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.313192 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvnpq\" (UniqueName: \"kubernetes.io/projected/32c33a83-8802-40c5-94ac-8943e8e5df5f-kube-api-access-vvnpq\") pod \"nova-kuttl-scheduler-0\" (UID: \"32c33a83-8802-40c5-94ac-8943e8e5df5f\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.313252 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9acab649-6a00-44d1-ab58-501d4059248c-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"9acab649-6a00-44d1-ab58-501d4059248c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.313275 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/32c33a83-8802-40c5-94ac-8943e8e5df5f-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"32c33a83-8802-40c5-94ac-8943e8e5df5f\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.313309 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3174b1b5-9a6c-4e14-816a-37f03a08aa2e-secret-volume\") pod \"collect-profiles-29493570-qkwd4\" (UID: \"3174b1b5-9a6c-4e14-816a-37f03a08aa2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-qkwd4" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.313337 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3174b1b5-9a6c-4e14-816a-37f03a08aa2e-config-volume\") pod \"collect-profiles-29493570-qkwd4\" (UID: \"3174b1b5-9a6c-4e14-816a-37f03a08aa2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-qkwd4" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.313363 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn8k7\" (UniqueName: \"kubernetes.io/projected/3174b1b5-9a6c-4e14-816a-37f03a08aa2e-kube-api-access-qn8k7\") pod \"collect-profiles-29493570-qkwd4\" (UID: \"3174b1b5-9a6c-4e14-816a-37f03a08aa2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-qkwd4" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.313391 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6chsf\" (UniqueName: \"kubernetes.io/projected/9acab649-6a00-44d1-ab58-501d4059248c-kube-api-access-6chsf\") pod \"nova-kuttl-metadata-0\" (UID: \"9acab649-6a00-44d1-ab58-501d4059248c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.318772 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c33a83-8802-40c5-94ac-8943e8e5df5f-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"32c33a83-8802-40c5-94ac-8943e8e5df5f\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.332368 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvnpq\" (UniqueName: \"kubernetes.io/projected/32c33a83-8802-40c5-94ac-8943e8e5df5f-kube-api-access-vvnpq\") pod \"nova-kuttl-scheduler-0\" (UID: \"32c33a83-8802-40c5-94ac-8943e8e5df5f\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.341882 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.414528 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9acab649-6a00-44d1-ab58-501d4059248c-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"9acab649-6a00-44d1-ab58-501d4059248c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.414621 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3174b1b5-9a6c-4e14-816a-37f03a08aa2e-secret-volume\") pod \"collect-profiles-29493570-qkwd4\" (UID: \"3174b1b5-9a6c-4e14-816a-37f03a08aa2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-qkwd4" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.414660 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3174b1b5-9a6c-4e14-816a-37f03a08aa2e-config-volume\") pod \"collect-profiles-29493570-qkwd4\" (UID: \"3174b1b5-9a6c-4e14-816a-37f03a08aa2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-qkwd4" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.414698 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn8k7\" (UniqueName: \"kubernetes.io/projected/3174b1b5-9a6c-4e14-816a-37f03a08aa2e-kube-api-access-qn8k7\") pod \"collect-profiles-29493570-qkwd4\" (UID: \"3174b1b5-9a6c-4e14-816a-37f03a08aa2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-qkwd4" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.414731 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6chsf\" (UniqueName: \"kubernetes.io/projected/9acab649-6a00-44d1-ab58-501d4059248c-kube-api-access-6chsf\") pod \"nova-kuttl-metadata-0\" (UID: \"9acab649-6a00-44d1-ab58-501d4059248c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.414772 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9acab649-6a00-44d1-ab58-501d4059248c-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"9acab649-6a00-44d1-ab58-501d4059248c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.415181 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9acab649-6a00-44d1-ab58-501d4059248c-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"9acab649-6a00-44d1-ab58-501d4059248c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.416817 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3174b1b5-9a6c-4e14-816a-37f03a08aa2e-config-volume\") pod \"collect-profiles-29493570-qkwd4\" (UID: \"3174b1b5-9a6c-4e14-816a-37f03a08aa2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-qkwd4" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.420414 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9acab649-6a00-44d1-ab58-501d4059248c-config-data\") pod \"nova-kuttl-metadata-0\" (UID: 
\"9acab649-6a00-44d1-ab58-501d4059248c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.422594 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3174b1b5-9a6c-4e14-816a-37f03a08aa2e-secret-volume\") pod \"collect-profiles-29493570-qkwd4\" (UID: \"3174b1b5-9a6c-4e14-816a-37f03a08aa2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-qkwd4" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.433401 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.441594 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn8k7\" (UniqueName: \"kubernetes.io/projected/3174b1b5-9a6c-4e14-816a-37f03a08aa2e-kube-api-access-qn8k7\") pod \"collect-profiles-29493570-qkwd4\" (UID: \"3174b1b5-9a6c-4e14-816a-37f03a08aa2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-qkwd4" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.444136 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6chsf\" (UniqueName: \"kubernetes.io/projected/9acab649-6a00-44d1-ab58-501d4059248c-kube-api-access-6chsf\") pod \"nova-kuttl-metadata-0\" (UID: \"9acab649-6a00-44d1-ab58-501d4059248c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.581219 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.590738 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-qkwd4" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.865282 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.882888 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"0987bc27-2528-4462-bcd5-0941ec12bef4","Type":"ContainerStarted","Data":"a0f608c11b206a1a00f46a813a267d4b9a92cdd93593f804b5bd85ed20aaf8fc"} Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.906665 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="018adc90-a685-4a65-b07b-521f37578e5e" path="/var/lib/kubelet/pods/018adc90-a685-4a65-b07b-521f37578e5e/volumes" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.907952 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6785ef1e-5034-4cd3-adee-1de1fb62373d" path="/var/lib/kubelet/pods/6785ef1e-5034-4cd3-adee-1de1fb62373d/volumes" Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.908585 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be84d443-b5a7-4cf3-bd18-ae0137323662" path="/var/lib/kubelet/pods/be84d443-b5a7-4cf3-bd18-ae0137323662/volumes" Jan 28 15:30:00 crc kubenswrapper[4893]: W0128 15:30:00.966866 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32c33a83_8802_40c5_94ac_8943e8e5df5f.slice/crio-d3e473e3c650f08b27eec5006cb6962d57b6557e62c893abeb35e8f7cf14a45c WatchSource:0}: Error finding container d3e473e3c650f08b27eec5006cb6962d57b6557e62c893abeb35e8f7cf14a45c: Status 404 returned error can't find the container with id d3e473e3c650f08b27eec5006cb6962d57b6557e62c893abeb35e8f7cf14a45c Jan 28 15:30:00 crc kubenswrapper[4893]: I0128 15:30:00.970762 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:30:01 crc kubenswrapper[4893]: I0128 15:30:01.095843 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:30:01 crc kubenswrapper[4893]: I0128 15:30:01.121008 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493570-qkwd4"] Jan 28 15:30:01 crc kubenswrapper[4893]: I0128 15:30:01.898738 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"9acab649-6a00-44d1-ab58-501d4059248c","Type":"ContainerStarted","Data":"06ab4639500e3f42c2b232dea862b31585b98071c77b693f7169db9521aeb57b"} Jan 28 15:30:01 crc kubenswrapper[4893]: I0128 15:30:01.899254 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"9acab649-6a00-44d1-ab58-501d4059248c","Type":"ContainerStarted","Data":"5dd39be6f8ea08cb5c422922a7c1bc60d9e5d7ffbb6e3136a1e0bc27e6c2bf90"} Jan 28 15:30:01 crc kubenswrapper[4893]: I0128 15:30:01.899271 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"9acab649-6a00-44d1-ab58-501d4059248c","Type":"ContainerStarted","Data":"62bdc803b29f847c1296542fa6e1117b35fea9f2a46aca3cb97b1fc5954a56c5"} Jan 28 15:30:01 crc kubenswrapper[4893]: I0128 15:30:01.901384 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" 
event={"ID":"32c33a83-8802-40c5-94ac-8943e8e5df5f","Type":"ContainerStarted","Data":"70a9d9ffeb569b81c3bb7111c5b86e19a566f35ca088a26fb604dc1e5e3fdb6b"} Jan 28 15:30:01 crc kubenswrapper[4893]: I0128 15:30:01.901416 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"32c33a83-8802-40c5-94ac-8943e8e5df5f","Type":"ContainerStarted","Data":"d3e473e3c650f08b27eec5006cb6962d57b6557e62c893abeb35e8f7cf14a45c"} Jan 28 15:30:01 crc kubenswrapper[4893]: I0128 15:30:01.905162 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"0987bc27-2528-4462-bcd5-0941ec12bef4","Type":"ContainerStarted","Data":"a389eef2c3e0a96e4d12701462d49828608a25645a60d4ee7ee518881b050c1e"} Jan 28 15:30:01 crc kubenswrapper[4893]: I0128 15:30:01.905235 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"0987bc27-2528-4462-bcd5-0941ec12bef4","Type":"ContainerStarted","Data":"d5c3de7719f40d45a4d342b029b78df9d105f54ea909241cfe57f636e35ecf42"} Jan 28 15:30:01 crc kubenswrapper[4893]: I0128 15:30:01.907864 4893 generic.go:334] "Generic (PLEG): container finished" podID="3174b1b5-9a6c-4e14-816a-37f03a08aa2e" containerID="7dd3df784a8701dc4d8b5812e39a89c491ec3d14bf8e094506175a1ef2e72d79" exitCode=0 Jan 28 15:30:01 crc kubenswrapper[4893]: I0128 15:30:01.907907 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-qkwd4" event={"ID":"3174b1b5-9a6c-4e14-816a-37f03a08aa2e","Type":"ContainerDied","Data":"7dd3df784a8701dc4d8b5812e39a89c491ec3d14bf8e094506175a1ef2e72d79"} Jan 28 15:30:01 crc kubenswrapper[4893]: I0128 15:30:01.907932 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-qkwd4" event={"ID":"3174b1b5-9a6c-4e14-816a-37f03a08aa2e","Type":"ContainerStarted","Data":"190dbe8cc42a48f49fabb3d275b6000bb9c96bb552cce9c4cd68f34a00bc46ee"} Jan 28 15:30:01 crc kubenswrapper[4893]: I0128 15:30:01.927171 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=1.9271466080000001 podStartE2EDuration="1.927146608s" podCreationTimestamp="2026-01-28 15:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:30:01.921875175 +0000 UTC m=+1719.695490223" watchObservedRunningTime="2026-01-28 15:30:01.927146608 +0000 UTC m=+1719.700761636" Jan 28 15:30:01 crc kubenswrapper[4893]: I0128 15:30:01.970201 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=1.970176935 podStartE2EDuration="1.970176935s" podCreationTimestamp="2026-01-28 15:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:30:01.964673165 +0000 UTC m=+1719.738288213" watchObservedRunningTime="2026-01-28 15:30:01.970176935 +0000 UTC m=+1719.743791963" Jan 28 15:30:01 crc kubenswrapper[4893]: I0128 15:30:01.985891 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.985875921 podStartE2EDuration="2.985875921s" podCreationTimestamp="2026-01-28 15:29:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:30:01.985656545 +0000 UTC m=+1719.759271593" watchObservedRunningTime="2026-01-28 15:30:01.985875921 +0000 UTC m=+1719.759490949" Jan 28 15:30:03 crc kubenswrapper[4893]: I0128 15:30:03.288103 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-qkwd4" Jan 28 15:30:03 crc kubenswrapper[4893]: I0128 15:30:03.370233 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn8k7\" (UniqueName: \"kubernetes.io/projected/3174b1b5-9a6c-4e14-816a-37f03a08aa2e-kube-api-access-qn8k7\") pod \"3174b1b5-9a6c-4e14-816a-37f03a08aa2e\" (UID: \"3174b1b5-9a6c-4e14-816a-37f03a08aa2e\") " Jan 28 15:30:03 crc kubenswrapper[4893]: I0128 15:30:03.370408 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3174b1b5-9a6c-4e14-816a-37f03a08aa2e-config-volume\") pod \"3174b1b5-9a6c-4e14-816a-37f03a08aa2e\" (UID: \"3174b1b5-9a6c-4e14-816a-37f03a08aa2e\") " Jan 28 15:30:03 crc kubenswrapper[4893]: I0128 15:30:03.370464 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3174b1b5-9a6c-4e14-816a-37f03a08aa2e-secret-volume\") pod \"3174b1b5-9a6c-4e14-816a-37f03a08aa2e\" (UID: \"3174b1b5-9a6c-4e14-816a-37f03a08aa2e\") " Jan 28 15:30:03 crc kubenswrapper[4893]: I0128 15:30:03.372255 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3174b1b5-9a6c-4e14-816a-37f03a08aa2e-config-volume" (OuterVolumeSpecName: "config-volume") pod "3174b1b5-9a6c-4e14-816a-37f03a08aa2e" (UID: "3174b1b5-9a6c-4e14-816a-37f03a08aa2e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:30:03 crc kubenswrapper[4893]: I0128 15:30:03.377198 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3174b1b5-9a6c-4e14-816a-37f03a08aa2e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3174b1b5-9a6c-4e14-816a-37f03a08aa2e" (UID: "3174b1b5-9a6c-4e14-816a-37f03a08aa2e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:30:03 crc kubenswrapper[4893]: I0128 15:30:03.378307 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3174b1b5-9a6c-4e14-816a-37f03a08aa2e-kube-api-access-qn8k7" (OuterVolumeSpecName: "kube-api-access-qn8k7") pod "3174b1b5-9a6c-4e14-816a-37f03a08aa2e" (UID: "3174b1b5-9a6c-4e14-816a-37f03a08aa2e"). InnerVolumeSpecName "kube-api-access-qn8k7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:30:03 crc kubenswrapper[4893]: I0128 15:30:03.473655 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn8k7\" (UniqueName: \"kubernetes.io/projected/3174b1b5-9a6c-4e14-816a-37f03a08aa2e-kube-api-access-qn8k7\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:03 crc kubenswrapper[4893]: I0128 15:30:03.473708 4893 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3174b1b5-9a6c-4e14-816a-37f03a08aa2e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:03 crc kubenswrapper[4893]: I0128 15:30:03.473721 4893 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3174b1b5-9a6c-4e14-816a-37f03a08aa2e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:03 crc kubenswrapper[4893]: I0128 15:30:03.947809 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-qkwd4" event={"ID":"3174b1b5-9a6c-4e14-816a-37f03a08aa2e","Type":"ContainerDied","Data":"190dbe8cc42a48f49fabb3d275b6000bb9c96bb552cce9c4cd68f34a00bc46ee"} Jan 28 15:30:03 crc kubenswrapper[4893]: I0128 15:30:03.947866 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="190dbe8cc42a48f49fabb3d275b6000bb9c96bb552cce9c4cd68f34a00bc46ee" Jan 28 15:30:03 crc kubenswrapper[4893]: I0128 15:30:03.947905 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-qkwd4" Jan 28 15:30:05 crc kubenswrapper[4893]: I0128 15:30:05.434539 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:30:05 crc kubenswrapper[4893]: I0128 15:30:05.582117 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:30:05 crc kubenswrapper[4893]: I0128 15:30:05.582177 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:30:10 crc kubenswrapper[4893]: I0128 15:30:10.342731 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:30:10 crc kubenswrapper[4893]: I0128 15:30:10.343348 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:30:10 crc kubenswrapper[4893]: I0128 15:30:10.434123 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:30:10 crc kubenswrapper[4893]: I0128 15:30:10.459792 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:30:10 crc kubenswrapper[4893]: I0128 15:30:10.582054 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:30:10 crc kubenswrapper[4893]: I0128 15:30:10.582114 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:30:11 crc kubenswrapper[4893]: I0128 15:30:11.025361 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:30:11 crc kubenswrapper[4893]: I0128 15:30:11.424703 4893 
prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="0987bc27-2528-4462-bcd5-0941ec12bef4" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.163:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:30:11 crc kubenswrapper[4893]: I0128 15:30:11.424719 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="0987bc27-2528-4462-bcd5-0941ec12bef4" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.163:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:30:11 crc kubenswrapper[4893]: I0128 15:30:11.664695 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="9acab649-6a00-44d1-ab58-501d4059248c" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.165:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:30:11 crc kubenswrapper[4893]: I0128 15:30:11.664694 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="9acab649-6a00-44d1-ab58-501d4059248c" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.165:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:30:12 crc kubenswrapper[4893]: I0128 15:30:12.898619 4893 scope.go:117] "RemoveContainer" containerID="79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51" Jan 28 15:30:12 crc kubenswrapper[4893]: E0128 15:30:12.899218 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:30:20 crc kubenswrapper[4893]: I0128 15:30:20.356299 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:30:20 crc kubenswrapper[4893]: I0128 15:30:20.357190 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:30:20 crc kubenswrapper[4893]: I0128 15:30:20.359073 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:30:20 crc kubenswrapper[4893]: I0128 15:30:20.378025 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:30:20 crc kubenswrapper[4893]: I0128 15:30:20.584211 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:30:20 crc kubenswrapper[4893]: I0128 15:30:20.585895 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:30:20 crc kubenswrapper[4893]: I0128 15:30:20.596435 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:30:21 crc kubenswrapper[4893]: I0128 15:30:21.082626 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:30:21 crc kubenswrapper[4893]: I0128 15:30:21.089127 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:30:21 crc kubenswrapper[4893]: I0128 15:30:21.090715 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:30:23 crc kubenswrapper[4893]: I0128 15:30:23.742882 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 28 15:30:23 crc kubenswrapper[4893]: E0128 15:30:23.743901 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3174b1b5-9a6c-4e14-816a-37f03a08aa2e" containerName="collect-profiles" Jan 28 15:30:23 crc kubenswrapper[4893]: I0128 15:30:23.743918 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="3174b1b5-9a6c-4e14-816a-37f03a08aa2e" containerName="collect-profiles" Jan 28 15:30:23 crc kubenswrapper[4893]: I0128 15:30:23.744128 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="3174b1b5-9a6c-4e14-816a-37f03a08aa2e" containerName="collect-profiles" Jan 28 15:30:23 crc kubenswrapper[4893]: I0128 15:30:23.745620 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 15:30:23 crc kubenswrapper[4893]: I0128 15:30:23.752195 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 28 15:30:23 crc kubenswrapper[4893]: I0128 15:30:23.754354 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 15:30:23 crc kubenswrapper[4893]: I0128 15:30:23.770531 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 28 15:30:23 crc kubenswrapper[4893]: I0128 15:30:23.792857 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 28 15:30:23 crc kubenswrapper[4893]: I0128 15:30:23.915973 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp62d\" (UniqueName: \"kubernetes.io/projected/747647df-2218-4f3a-a1f4-132b662282ac-kube-api-access-kp62d\") pod \"nova-kuttl-api-2\" (UID: \"747647df-2218-4f3a-a1f4-132b662282ac\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 15:30:23 crc kubenswrapper[4893]: I0128 15:30:23.916568 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/747647df-2218-4f3a-a1f4-132b662282ac-config-data\") pod \"nova-kuttl-api-2\" (UID: \"747647df-2218-4f3a-a1f4-132b662282ac\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 15:30:23 crc kubenswrapper[4893]: I0128 15:30:23.916628 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzfzr\" (UniqueName: \"kubernetes.io/projected/7dab9ea9-5d9d-436d-bbfe-4dae18a6e224-kube-api-access-wzfzr\") pod \"nova-kuttl-api-1\" (UID: \"7dab9ea9-5d9d-436d-bbfe-4dae18a6e224\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 15:30:23 crc kubenswrapper[4893]: I0128 15:30:23.916663 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/747647df-2218-4f3a-a1f4-132b662282ac-logs\") pod \"nova-kuttl-api-2\" (UID: \"747647df-2218-4f3a-a1f4-132b662282ac\") " 
pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 15:30:23 crc kubenswrapper[4893]: I0128 15:30:23.916691 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dab9ea9-5d9d-436d-bbfe-4dae18a6e224-config-data\") pod \"nova-kuttl-api-1\" (UID: \"7dab9ea9-5d9d-436d-bbfe-4dae18a6e224\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 15:30:23 crc kubenswrapper[4893]: I0128 15:30:23.916790 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dab9ea9-5d9d-436d-bbfe-4dae18a6e224-logs\") pod \"nova-kuttl-api-1\" (UID: \"7dab9ea9-5d9d-436d-bbfe-4dae18a6e224\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.018928 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dab9ea9-5d9d-436d-bbfe-4dae18a6e224-logs\") pod \"nova-kuttl-api-1\" (UID: \"7dab9ea9-5d9d-436d-bbfe-4dae18a6e224\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.019054 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp62d\" (UniqueName: \"kubernetes.io/projected/747647df-2218-4f3a-a1f4-132b662282ac-kube-api-access-kp62d\") pod \"nova-kuttl-api-2\" (UID: \"747647df-2218-4f3a-a1f4-132b662282ac\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.019172 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/747647df-2218-4f3a-a1f4-132b662282ac-config-data\") pod \"nova-kuttl-api-2\" (UID: \"747647df-2218-4f3a-a1f4-132b662282ac\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.019252 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzfzr\" (UniqueName: \"kubernetes.io/projected/7dab9ea9-5d9d-436d-bbfe-4dae18a6e224-kube-api-access-wzfzr\") pod \"nova-kuttl-api-1\" (UID: \"7dab9ea9-5d9d-436d-bbfe-4dae18a6e224\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.019293 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/747647df-2218-4f3a-a1f4-132b662282ac-logs\") pod \"nova-kuttl-api-2\" (UID: \"747647df-2218-4f3a-a1f4-132b662282ac\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.019324 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dab9ea9-5d9d-436d-bbfe-4dae18a6e224-config-data\") pod \"nova-kuttl-api-1\" (UID: \"7dab9ea9-5d9d-436d-bbfe-4dae18a6e224\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.020063 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/747647df-2218-4f3a-a1f4-132b662282ac-logs\") pod \"nova-kuttl-api-2\" (UID: \"747647df-2218-4f3a-a1f4-132b662282ac\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.020358 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/7dab9ea9-5d9d-436d-bbfe-4dae18a6e224-logs\") pod \"nova-kuttl-api-1\" (UID: \"7dab9ea9-5d9d-436d-bbfe-4dae18a6e224\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.037405 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/747647df-2218-4f3a-a1f4-132b662282ac-config-data\") pod \"nova-kuttl-api-2\" (UID: \"747647df-2218-4f3a-a1f4-132b662282ac\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.039942 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dab9ea9-5d9d-436d-bbfe-4dae18a6e224-config-data\") pod \"nova-kuttl-api-1\" (UID: \"7dab9ea9-5d9d-436d-bbfe-4dae18a6e224\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.045775 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp62d\" (UniqueName: \"kubernetes.io/projected/747647df-2218-4f3a-a1f4-132b662282ac-kube-api-access-kp62d\") pod \"nova-kuttl-api-2\" (UID: \"747647df-2218-4f3a-a1f4-132b662282ac\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.046833 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzfzr\" (UniqueName: \"kubernetes.io/projected/7dab9ea9-5d9d-436d-bbfe-4dae18a6e224-kube-api-access-wzfzr\") pod \"nova-kuttl-api-1\" (UID: \"7dab9ea9-5d9d-436d-bbfe-4dae18a6e224\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.070012 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.084706 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.141013 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.142110 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.165901 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.167393 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.185806 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.200555 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.226611 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trld7\" (UniqueName: \"kubernetes.io/projected/e0b48ebb-b212-4370-aee1-db0d64b7a446-kube-api-access-trld7\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"e0b48ebb-b212-4370-aee1-db0d64b7a446\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.226793 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0b48ebb-b212-4370-aee1-db0d64b7a446-config-data\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"e0b48ebb-b212-4370-aee1-db0d64b7a446\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.226818 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k45hk\" (UniqueName: \"kubernetes.io/projected/4802233c-9a5c-4074-b1ae-df434de29109-kube-api-access-k45hk\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"4802233c-9a5c-4074-b1ae-df434de29109\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.226939 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4802233c-9a5c-4074-b1ae-df434de29109-config-data\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"4802233c-9a5c-4074-b1ae-df434de29109\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.328148 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4802233c-9a5c-4074-b1ae-df434de29109-config-data\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"4802233c-9a5c-4074-b1ae-df434de29109\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.328622 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trld7\" (UniqueName: \"kubernetes.io/projected/e0b48ebb-b212-4370-aee1-db0d64b7a446-kube-api-access-trld7\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"e0b48ebb-b212-4370-aee1-db0d64b7a446\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.328649 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0b48ebb-b212-4370-aee1-db0d64b7a446-config-data\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"e0b48ebb-b212-4370-aee1-db0d64b7a446\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.328672 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k45hk\" (UniqueName: 
\"kubernetes.io/projected/4802233c-9a5c-4074-b1ae-df434de29109-kube-api-access-k45hk\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"4802233c-9a5c-4074-b1ae-df434de29109\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.339773 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0b48ebb-b212-4370-aee1-db0d64b7a446-config-data\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"e0b48ebb-b212-4370-aee1-db0d64b7a446\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.339773 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4802233c-9a5c-4074-b1ae-df434de29109-config-data\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"4802233c-9a5c-4074-b1ae-df434de29109\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.350578 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trld7\" (UniqueName: \"kubernetes.io/projected/e0b48ebb-b212-4370-aee1-db0d64b7a446-kube-api-access-trld7\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"e0b48ebb-b212-4370-aee1-db0d64b7a446\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.357206 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k45hk\" (UniqueName: \"kubernetes.io/projected/4802233c-9a5c-4074-b1ae-df434de29109-kube-api-access-k45hk\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"4802233c-9a5c-4074-b1ae-df434de29109\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.500942 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.514648 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.597316 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 28 15:30:24 crc kubenswrapper[4893]: W0128 15:30:24.599130 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod747647df_2218_4f3a_a1f4_132b662282ac.slice/crio-9c2db18a294b2df4bbee053880f8fdf26ad3db38bdd12b9a5729e469b0f1e2f0 WatchSource:0}: Error finding container 9c2db18a294b2df4bbee053880f8fdf26ad3db38bdd12b9a5729e469b0f1e2f0: Status 404 returned error can't find the container with id 9c2db18a294b2df4bbee053880f8fdf26ad3db38bdd12b9a5729e469b0f1e2f0 Jan 28 15:30:24 crc kubenswrapper[4893]: I0128 15:30:24.720202 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 28 15:30:25 crc kubenswrapper[4893]: I0128 15:30:25.084536 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 28 15:30:25 crc kubenswrapper[4893]: W0128 15:30:25.099711 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0b48ebb_b212_4370_aee1_db0d64b7a446.slice/crio-3477a0ae4b85374e8799c93282859e7d93db41810da20c4e831dac870ea39e8e WatchSource:0}: Error finding container 3477a0ae4b85374e8799c93282859e7d93db41810da20c4e831dac870ea39e8e: Status 404 returned error can't find the container with id 3477a0ae4b85374e8799c93282859e7d93db41810da20c4e831dac870ea39e8e Jan 28 15:30:25 crc kubenswrapper[4893]: I0128 15:30:25.153713 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"7dab9ea9-5d9d-436d-bbfe-4dae18a6e224","Type":"ContainerStarted","Data":"04b679bf0479c79c4fb9c19a7e81a8f1dc35a6e8ad6cf96a21e796fa413cb978"} Jan 28 15:30:25 crc kubenswrapper[4893]: I0128 15:30:25.159787 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"7dab9ea9-5d9d-436d-bbfe-4dae18a6e224","Type":"ContainerStarted","Data":"1bea29233d0b8710c90b84d31692a1ddf71558908334349210428f386953073a"} Jan 28 15:30:25 crc kubenswrapper[4893]: I0128 15:30:25.169585 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" event={"ID":"e0b48ebb-b212-4370-aee1-db0d64b7a446","Type":"ContainerStarted","Data":"3477a0ae4b85374e8799c93282859e7d93db41810da20c4e831dac870ea39e8e"} Jan 28 15:30:25 crc kubenswrapper[4893]: I0128 15:30:25.193794 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"747647df-2218-4f3a-a1f4-132b662282ac","Type":"ContainerStarted","Data":"fb85e94d3f0b9e99f32ab547b316fe7b14954907a33b110aadebc972e7e16158"} Jan 28 15:30:25 crc kubenswrapper[4893]: I0128 15:30:25.193873 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"747647df-2218-4f3a-a1f4-132b662282ac","Type":"ContainerStarted","Data":"1426fde4640919ecaf8ccf0dab3e2fb29e0780a44171888b713852ed766e1baf"} Jan 28 15:30:25 crc kubenswrapper[4893]: I0128 15:30:25.193887 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"747647df-2218-4f3a-a1f4-132b662282ac","Type":"ContainerStarted","Data":"9c2db18a294b2df4bbee053880f8fdf26ad3db38bdd12b9a5729e469b0f1e2f0"} Jan 28 
15:30:25 crc kubenswrapper[4893]: I0128 15:30:25.211698 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 28 15:30:25 crc kubenswrapper[4893]: I0128 15:30:25.235915 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-2" podStartSLOduration=2.235888737 podStartE2EDuration="2.235888737s" podCreationTimestamp="2026-01-28 15:30:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:30:25.225966688 +0000 UTC m=+1742.999581736" watchObservedRunningTime="2026-01-28 15:30:25.235888737 +0000 UTC m=+1743.009503765" Jan 28 15:30:25 crc kubenswrapper[4893]: W0128 15:30:25.275671 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4802233c_9a5c_4074_b1ae_df434de29109.slice/crio-aa982390bba057ac9ef2eeb432a774c43afc693db5b8fe9d613cfdf16e0210a2 WatchSource:0}: Error finding container aa982390bba057ac9ef2eeb432a774c43afc693db5b8fe9d613cfdf16e0210a2: Status 404 returned error can't find the container with id aa982390bba057ac9ef2eeb432a774c43afc693db5b8fe9d613cfdf16e0210a2 Jan 28 15:30:25 crc kubenswrapper[4893]: I0128 15:30:25.892134 4893 scope.go:117] "RemoveContainer" containerID="79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51" Jan 28 15:30:25 crc kubenswrapper[4893]: E0128 15:30:25.892526 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:30:26 crc kubenswrapper[4893]: I0128 15:30:26.203370 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" event={"ID":"4802233c-9a5c-4074-b1ae-df434de29109","Type":"ContainerStarted","Data":"74d304226fd75a4afafe105c8fa169b4b6acdfdc99f465f2d566223b8af4b04b"} Jan 28 15:30:26 crc kubenswrapper[4893]: I0128 15:30:26.203826 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 28 15:30:26 crc kubenswrapper[4893]: I0128 15:30:26.203840 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" event={"ID":"4802233c-9a5c-4074-b1ae-df434de29109","Type":"ContainerStarted","Data":"aa982390bba057ac9ef2eeb432a774c43afc693db5b8fe9d613cfdf16e0210a2"} Jan 28 15:30:26 crc kubenswrapper[4893]: I0128 15:30:26.206805 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" event={"ID":"e0b48ebb-b212-4370-aee1-db0d64b7a446","Type":"ContainerStarted","Data":"d6c6f246137013566386010e74c0300a8f5af6ff1b6f813ceb0e2b555cb54934"} Jan 28 15:30:26 crc kubenswrapper[4893]: I0128 15:30:26.206941 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 28 15:30:26 crc kubenswrapper[4893]: I0128 15:30:26.209164 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" 
event={"ID":"7dab9ea9-5d9d-436d-bbfe-4dae18a6e224","Type":"ContainerStarted","Data":"08fd1ab72835c2c50055716eb6cea06b76157b3f5c4e049073f2c8d2c0ffaa84"} Jan 28 15:30:26 crc kubenswrapper[4893]: I0128 15:30:26.227342 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" podStartSLOduration=2.227274688 podStartE2EDuration="2.227274688s" podCreationTimestamp="2026-01-28 15:30:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:30:26.22219964 +0000 UTC m=+1743.995814688" watchObservedRunningTime="2026-01-28 15:30:26.227274688 +0000 UTC m=+1744.000889716" Jan 28 15:30:26 crc kubenswrapper[4893]: I0128 15:30:26.249293 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-1" podStartSLOduration=3.249270005 podStartE2EDuration="3.249270005s" podCreationTimestamp="2026-01-28 15:30:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:30:26.239514239 +0000 UTC m=+1744.013129287" watchObservedRunningTime="2026-01-28 15:30:26.249270005 +0000 UTC m=+1744.022885033" Jan 28 15:30:26 crc kubenswrapper[4893]: I0128 15:30:26.260114 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" podStartSLOduration=2.260089758 podStartE2EDuration="2.260089758s" podCreationTimestamp="2026-01-28 15:30:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:30:26.256660374 +0000 UTC m=+1744.030275402" watchObservedRunningTime="2026-01-28 15:30:26.260089758 +0000 UTC m=+1744.033704786" Jan 28 15:30:30 crc kubenswrapper[4893]: I0128 15:30:30.098589 4893 scope.go:117] "RemoveContainer" containerID="74498b77bba1d94b0a19263b1a8fa9e29130ed2a540a3b9b6be1b5683d6d05d5" Jan 28 15:30:30 crc kubenswrapper[4893]: I0128 15:30:30.123261 4893 scope.go:117] "RemoveContainer" containerID="a485ee8e12c459d8af7239e69f3391a6c16707d1bc205dff4ce305c4df08f5bc" Jan 28 15:30:30 crc kubenswrapper[4893]: I0128 15:30:30.159710 4893 scope.go:117] "RemoveContainer" containerID="e2034b4c6cd41a77010538147a26d8aa68deff1ff7c96bc23905bd2b86fd6c85" Jan 28 15:30:30 crc kubenswrapper[4893]: I0128 15:30:30.190517 4893 scope.go:117] "RemoveContainer" containerID="e2ebfd55fda8709ff21ec3e56771802b492867dab184dea83ac0e1b77c818d91" Jan 28 15:30:30 crc kubenswrapper[4893]: I0128 15:30:30.224829 4893 scope.go:117] "RemoveContainer" containerID="c5b3d8589b4a8429e006febea6cba25c1c54675226a49dc44067e444ce1b9931" Jan 28 15:30:30 crc kubenswrapper[4893]: I0128 15:30:30.286991 4893 scope.go:117] "RemoveContainer" containerID="120c9e924930b9187c740f4ecd27062cd24aeebfc246671da3d9bf133203c2b5" Jan 28 15:30:34 crc kubenswrapper[4893]: I0128 15:30:34.070317 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 15:30:34 crc kubenswrapper[4893]: I0128 15:30:34.070690 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 15:30:34 crc kubenswrapper[4893]: I0128 15:30:34.085436 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 15:30:34 crc kubenswrapper[4893]: I0128 
Jan 28 15:30:34 crc kubenswrapper[4893]: I0128 15:30:34.085515 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-2"
Jan 28 15:30:34 crc kubenswrapper[4893]: I0128 15:30:34.535681 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1"
Jan 28 15:30:34 crc kubenswrapper[4893]: I0128 15:30:34.547232 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2"
Jan 28 15:30:35 crc kubenswrapper[4893]: I0128 15:30:35.111758 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-1" podUID="7dab9ea9-5d9d-436d-bbfe-4dae18a6e224" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.167:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 15:30:35 crc kubenswrapper[4893]: I0128 15:30:35.236863 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-2" podUID="747647df-2218-4f3a-a1f4-132b662282ac" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.168:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 15:30:35 crc kubenswrapper[4893]: I0128 15:30:35.236878 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-2" podUID="747647df-2218-4f3a-a1f4-132b662282ac" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.168:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 15:30:35 crc kubenswrapper[4893]: I0128 15:30:35.236871 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-1" podUID="7dab9ea9-5d9d-436d-bbfe-4dae18a6e224" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.167:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 15:30:35 crc kubenswrapper[4893]: I0128 15:30:35.937197 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"]
Jan 28 15:30:35 crc kubenswrapper[4893]: I0128 15:30:35.939718 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-2"
Jan 28 15:30:35 crc kubenswrapper[4893]: I0128 15:30:35.956233 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"]
Jan 28 15:30:35 crc kubenswrapper[4893]: I0128 15:30:35.958151 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-1"
Jan 28 15:30:35 crc kubenswrapper[4893]: I0128 15:30:35.974149 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"]
Jan 28 15:30:35 crc kubenswrapper[4893]: I0128 15:30:35.996317 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"]
Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.028553 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"]
Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.038452 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"]
Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.038617 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-2"
Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.039924 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.044292 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bd2f429-3780-4f91-8eeb-fab736e0ff82-config-data\") pod \"nova-kuttl-scheduler-1\" (UID: \"8bd2f429-3780-4f91-8eeb-fab736e0ff82\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1"
Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.044364 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d90cbda0-f31c-4d68-8a5d-288d1e651c62-config-data\") pod \"nova-kuttl-scheduler-2\" (UID: \"d90cbda0-f31c-4d68-8a5d-288d1e651c62\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2"
Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.044490 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v984h\" (UniqueName: \"kubernetes.io/projected/8bd2f429-3780-4f91-8eeb-fab736e0ff82-kube-api-access-v984h\") pod \"nova-kuttl-scheduler-1\" (UID: \"8bd2f429-3780-4f91-8eeb-fab736e0ff82\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1"
Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.044521 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnnp9\" (UniqueName: \"kubernetes.io/projected/d90cbda0-f31c-4d68-8a5d-288d1e651c62-kube-api-access-tnnp9\") pod \"nova-kuttl-scheduler-2\" (UID: \"d90cbda0-f31c-4d68-8a5d-288d1e651c62\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2"
Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.067470 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"]
Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.079645 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"]
Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.145812 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2a91c2c-ade3-4dd7-983a-49eda7bf545b-config-data\") pod \"nova-kuttl-metadata-1\" (UID: \"c2a91c2c-ade3-4dd7-983a-49eda7bf545b\") " pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.145872 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b747a8ff-f3ba-438c-84a5-2175ace99287-config-data\") pod \"nova-kuttl-metadata-2\" (UID: \"b747a8ff-f3ba-438c-84a5-2175ace99287\") " pod="nova-kuttl-default/nova-kuttl-metadata-2"
Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.145914 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d90cbda0-f31c-4d68-8a5d-288d1e651c62-config-data\") pod \"nova-kuttl-scheduler-2\" (UID: \"d90cbda0-f31c-4d68-8a5d-288d1e651c62\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2"
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2a91c2c-ade3-4dd7-983a-49eda7bf545b-logs\") pod \"nova-kuttl-metadata-1\" (UID: \"c2a91c2c-ade3-4dd7-983a-49eda7bf545b\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.146021 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v984h\" (UniqueName: \"kubernetes.io/projected/8bd2f429-3780-4f91-8eeb-fab736e0ff82-kube-api-access-v984h\") pod \"nova-kuttl-scheduler-1\" (UID: \"8bd2f429-3780-4f91-8eeb-fab736e0ff82\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.146050 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnnp9\" (UniqueName: \"kubernetes.io/projected/d90cbda0-f31c-4d68-8a5d-288d1e651c62-kube-api-access-tnnp9\") pod \"nova-kuttl-scheduler-2\" (UID: \"d90cbda0-f31c-4d68-8a5d-288d1e651c62\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.146076 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg5rr\" (UniqueName: \"kubernetes.io/projected/b747a8ff-f3ba-438c-84a5-2175ace99287-kube-api-access-tg5rr\") pod \"nova-kuttl-metadata-2\" (UID: \"b747a8ff-f3ba-438c-84a5-2175ace99287\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.146139 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b747a8ff-f3ba-438c-84a5-2175ace99287-logs\") pod \"nova-kuttl-metadata-2\" (UID: \"b747a8ff-f3ba-438c-84a5-2175ace99287\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.146176 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx2rp\" (UniqueName: \"kubernetes.io/projected/c2a91c2c-ade3-4dd7-983a-49eda7bf545b-kube-api-access-sx2rp\") pod \"nova-kuttl-metadata-1\" (UID: \"c2a91c2c-ade3-4dd7-983a-49eda7bf545b\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.146205 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bd2f429-3780-4f91-8eeb-fab736e0ff82-config-data\") pod \"nova-kuttl-scheduler-1\" (UID: \"8bd2f429-3780-4f91-8eeb-fab736e0ff82\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.155158 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d90cbda0-f31c-4d68-8a5d-288d1e651c62-config-data\") pod \"nova-kuttl-scheduler-2\" (UID: \"d90cbda0-f31c-4d68-8a5d-288d1e651c62\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.167176 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bd2f429-3780-4f91-8eeb-fab736e0ff82-config-data\") pod \"nova-kuttl-scheduler-1\" (UID: \"8bd2f429-3780-4f91-8eeb-fab736e0ff82\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.179024 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v984h\" (UniqueName: 
\"kubernetes.io/projected/8bd2f429-3780-4f91-8eeb-fab736e0ff82-kube-api-access-v984h\") pod \"nova-kuttl-scheduler-1\" (UID: \"8bd2f429-3780-4f91-8eeb-fab736e0ff82\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.190144 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnnp9\" (UniqueName: \"kubernetes.io/projected/d90cbda0-f31c-4d68-8a5d-288d1e651c62-kube-api-access-tnnp9\") pod \"nova-kuttl-scheduler-2\" (UID: \"d90cbda0-f31c-4d68-8a5d-288d1e651c62\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.247397 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg5rr\" (UniqueName: \"kubernetes.io/projected/b747a8ff-f3ba-438c-84a5-2175ace99287-kube-api-access-tg5rr\") pod \"nova-kuttl-metadata-2\" (UID: \"b747a8ff-f3ba-438c-84a5-2175ace99287\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.247498 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b747a8ff-f3ba-438c-84a5-2175ace99287-logs\") pod \"nova-kuttl-metadata-2\" (UID: \"b747a8ff-f3ba-438c-84a5-2175ace99287\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.247536 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx2rp\" (UniqueName: \"kubernetes.io/projected/c2a91c2c-ade3-4dd7-983a-49eda7bf545b-kube-api-access-sx2rp\") pod \"nova-kuttl-metadata-1\" (UID: \"c2a91c2c-ade3-4dd7-983a-49eda7bf545b\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.247570 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2a91c2c-ade3-4dd7-983a-49eda7bf545b-config-data\") pod \"nova-kuttl-metadata-1\" (UID: \"c2a91c2c-ade3-4dd7-983a-49eda7bf545b\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.247585 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b747a8ff-f3ba-438c-84a5-2175ace99287-config-data\") pod \"nova-kuttl-metadata-2\" (UID: \"b747a8ff-f3ba-438c-84a5-2175ace99287\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.247634 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2a91c2c-ade3-4dd7-983a-49eda7bf545b-logs\") pod \"nova-kuttl-metadata-1\" (UID: \"c2a91c2c-ade3-4dd7-983a-49eda7bf545b\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.248351 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2a91c2c-ade3-4dd7-983a-49eda7bf545b-logs\") pod \"nova-kuttl-metadata-1\" (UID: \"c2a91c2c-ade3-4dd7-983a-49eda7bf545b\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.248491 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b747a8ff-f3ba-438c-84a5-2175ace99287-logs\") pod \"nova-kuttl-metadata-2\" (UID: \"b747a8ff-f3ba-438c-84a5-2175ace99287\") " 
pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.262207 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b747a8ff-f3ba-438c-84a5-2175ace99287-config-data\") pod \"nova-kuttl-metadata-2\" (UID: \"b747a8ff-f3ba-438c-84a5-2175ace99287\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.262208 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2a91c2c-ade3-4dd7-983a-49eda7bf545b-config-data\") pod \"nova-kuttl-metadata-1\" (UID: \"c2a91c2c-ade3-4dd7-983a-49eda7bf545b\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.266429 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.279902 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx2rp\" (UniqueName: \"kubernetes.io/projected/c2a91c2c-ade3-4dd7-983a-49eda7bf545b-kube-api-access-sx2rp\") pod \"nova-kuttl-metadata-1\" (UID: \"c2a91c2c-ade3-4dd7-983a-49eda7bf545b\") " pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.291345 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg5rr\" (UniqueName: \"kubernetes.io/projected/b747a8ff-f3ba-438c-84a5-2175ace99287-kube-api-access-tg5rr\") pod \"nova-kuttl-metadata-2\" (UID: \"b747a8ff-f3ba-438c-84a5-2175ace99287\") " pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.301921 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.376918 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.382971 4893 util.go:30] "No sandbox for pod can be found. 
Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.382971 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 28 15:30:36 crc kubenswrapper[4893]: I0128 15:30:36.882389 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"]
Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.028928 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"]
Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.107324 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"]
Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.122453 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"]
Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.406038 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-2" event={"ID":"d90cbda0-f31c-4d68-8a5d-288d1e651c62","Type":"ContainerStarted","Data":"fd88d5dead44e2ebc343335c13b982350c8e98a5e3a16155ce6e1e8014da35f4"}
Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.406558 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-2" event={"ID":"d90cbda0-f31c-4d68-8a5d-288d1e651c62","Type":"ContainerStarted","Data":"a662ee0e42db5acb617ca42d9ff0b260d664bc32ae9f3b1c4d39a94103ca1259"}
Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.415466 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"b747a8ff-f3ba-438c-84a5-2175ace99287","Type":"ContainerStarted","Data":"9deb24d2a03765eb443d2d00b347f94bedb035551d2cd92297fd642e804f79a2"}
Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.415546 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"b747a8ff-f3ba-438c-84a5-2175ace99287","Type":"ContainerStarted","Data":"2a883dbbfd799785d65c97e25284f93b7b3e41c4de53a14f38bd854bedae3dc3"}
Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.426435 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-2" podStartSLOduration=2.42641266 podStartE2EDuration="2.42641266s" podCreationTimestamp="2026-01-28 15:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:30:37.423654405 +0000 UTC m=+1755.197269433" watchObservedRunningTime="2026-01-28 15:30:37.42641266 +0000 UTC m=+1755.200027698"
Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.432046 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"c2a91c2c-ade3-4dd7-983a-49eda7bf545b","Type":"ContainerStarted","Data":"4e56d304cd1b510fe02ccca7c135cc7019afafa6cb02dd0f61b3c6406cf1320d"}
Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.432106 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"c2a91c2c-ade3-4dd7-983a-49eda7bf545b","Type":"ContainerStarted","Data":"8aff4536873b55938df332c7ff24d99fbf8ab71dbc21d0c6acce3716349e77ab"}
Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.465052 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-1" event={"ID":"8bd2f429-3780-4f91-8eeb-fab736e0ff82","Type":"ContainerStarted","Data":"1e99e30acefad6953c93639bc48612e302db94a8c25087681cad617a14f06d9d"}
Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.465114 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-1" event={"ID":"8bd2f429-3780-4f91-8eeb-fab736e0ff82","Type":"ContainerStarted","Data":"3850966a8f8588555cd16a2710ab50e90380400ef7183c6e5f3973a1695c0833"}
Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.516375 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-1" podStartSLOduration=2.516346489 podStartE2EDuration="2.516346489s" podCreationTimestamp="2026-01-28 15:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:30:37.50425526 +0000 UTC m=+1755.277870298" watchObservedRunningTime="2026-01-28 15:30:37.516346489 +0000 UTC m=+1755.289961517"
Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.617275 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"]
Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.618934 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2"
Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.637407 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"]
Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.642090 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1"
Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.659595 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"]
Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.683682 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"]
Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.706983 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6-config-data\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2"
Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.707065 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ddcf88-0be9-47cb-9301-0618b47d28f6-config-data\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"03ddcf88-0be9-47cb-9301-0618b47d28f6\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1"
Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.707087 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm6vb\" (UniqueName: \"kubernetes.io/projected/e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6-kube-api-access-mm6vb\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2"
\"03ddcf88-0be9-47cb-9301-0618b47d28f6\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.809305 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g26nr\" (UniqueName: \"kubernetes.io/projected/03ddcf88-0be9-47cb-9301-0618b47d28f6-kube-api-access-g26nr\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"03ddcf88-0be9-47cb-9301-0618b47d28f6\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.809394 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6-config-data\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.809451 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ddcf88-0be9-47cb-9301-0618b47d28f6-config-data\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"03ddcf88-0be9-47cb-9301-0618b47d28f6\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.809484 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm6vb\" (UniqueName: \"kubernetes.io/projected/e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6-kube-api-access-mm6vb\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.814373 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ddcf88-0be9-47cb-9301-0618b47d28f6-config-data\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"03ddcf88-0be9-47cb-9301-0618b47d28f6\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.817276 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6-config-data\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.828011 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm6vb\" (UniqueName: \"kubernetes.io/projected/e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6-kube-api-access-mm6vb\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.828516 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g26nr\" (UniqueName: \"kubernetes.io/projected/03ddcf88-0be9-47cb-9301-0618b47d28f6-kube-api-access-g26nr\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"03ddcf88-0be9-47cb-9301-0618b47d28f6\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.974315 4893 util.go:30] "No sandbox for pod can be found. 
Jan 28 15:30:37 crc kubenswrapper[4893]: I0128 15:30:37.974315 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2"
Jan 28 15:30:38 crc kubenswrapper[4893]: I0128 15:30:38.002692 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1"
Jan 28 15:30:38 crc kubenswrapper[4893]: I0128 15:30:38.477237 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"b747a8ff-f3ba-438c-84a5-2175ace99287","Type":"ContainerStarted","Data":"abae1f18a2baaf05ac990b2a23424bc8e1f5ffe2c192a429562cafb9405b5e86"}
Jan 28 15:30:38 crc kubenswrapper[4893]: I0128 15:30:38.482162 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"c2a91c2c-ade3-4dd7-983a-49eda7bf545b","Type":"ContainerStarted","Data":"a624bf19f62397cc4382015857d1007ed36b83c5e86bb8f1e88e57e8a4a7396b"}
Jan 28 15:30:38 crc kubenswrapper[4893]: I0128 15:30:38.501326 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-2" podStartSLOduration=3.501298615 podStartE2EDuration="3.501298615s" podCreationTimestamp="2026-01-28 15:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:30:38.499053914 +0000 UTC m=+1756.272668962" watchObservedRunningTime="2026-01-28 15:30:38.501298615 +0000 UTC m=+1756.274913663"
Jan 28 15:30:38 crc kubenswrapper[4893]: I0128 15:30:38.528376 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"]
Jan 28 15:30:38 crc kubenswrapper[4893]: I0128 15:30:38.536813 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-1" podStartSLOduration=3.536790258 podStartE2EDuration="3.536790258s" podCreationTimestamp="2026-01-28 15:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:30:38.518371608 +0000 UTC m=+1756.291986646" watchObservedRunningTime="2026-01-28 15:30:38.536790258 +0000 UTC m=+1756.310405286"
Jan 28 15:30:38 crc kubenswrapper[4893]: W0128 15:30:38.543681 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode92cc5e6_dd20_4dde_9a09_7c8fbc646ca6.slice/crio-96621e7c15f530958ab0791578db6f1608d39221bacff989d097dc4a09352bbb WatchSource:0}: Error finding container 96621e7c15f530958ab0791578db6f1608d39221bacff989d097dc4a09352bbb: Status 404 returned error can't find the container with id 96621e7c15f530958ab0791578db6f1608d39221bacff989d097dc4a09352bbb
Jan 28 15:30:38 crc kubenswrapper[4893]: I0128 15:30:38.594630 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"]
Jan 28 15:30:39 crc kubenswrapper[4893]: I0128 15:30:39.494256 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" event={"ID":"e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6","Type":"ContainerStarted","Data":"5b63da1cde1ef3fdafcac07f15ba29288fe9f299f96d3a3211aaf65902f07096"}
Jan 28 15:30:39 crc kubenswrapper[4893]: I0128 15:30:39.494595 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2"
event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" event={"ID":"e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6","Type":"ContainerStarted","Data":"96621e7c15f530958ab0791578db6f1608d39221bacff989d097dc4a09352bbb"} Jan 28 15:30:39 crc kubenswrapper[4893]: I0128 15:30:39.496709 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" event={"ID":"03ddcf88-0be9-47cb-9301-0618b47d28f6","Type":"ContainerStarted","Data":"3ee81afc9404f5f04746ad73ee4dfadd3720e2ef36b461b9bb10e7922bc358ed"} Jan 28 15:30:39 crc kubenswrapper[4893]: I0128 15:30:39.496797 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" event={"ID":"03ddcf88-0be9-47cb-9301-0618b47d28f6","Type":"ContainerStarted","Data":"e0718ac535b33f273fc0174379609b39342330c56baaf09544ae02b0baebbc8d"} Jan 28 15:30:39 crc kubenswrapper[4893]: I0128 15:30:39.526584 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" podStartSLOduration=2.526552824 podStartE2EDuration="2.526552824s" podCreationTimestamp="2026-01-28 15:30:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:30:39.518906647 +0000 UTC m=+1757.292521675" watchObservedRunningTime="2026-01-28 15:30:39.526552824 +0000 UTC m=+1757.300167852" Jan 28 15:30:39 crc kubenswrapper[4893]: I0128 15:30:39.549149 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" podStartSLOduration=2.549127856 podStartE2EDuration="2.549127856s" podCreationTimestamp="2026-01-28 15:30:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:30:39.548435198 +0000 UTC m=+1757.322050246" watchObservedRunningTime="2026-01-28 15:30:39.549127856 +0000 UTC m=+1757.322742884" Jan 28 15:30:40 crc kubenswrapper[4893]: I0128 15:30:40.510620 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 28 15:30:40 crc kubenswrapper[4893]: I0128 15:30:40.892826 4893 scope.go:117] "RemoveContainer" containerID="79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51" Jan 28 15:30:40 crc kubenswrapper[4893]: E0128 15:30:40.893054 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:30:41 crc kubenswrapper[4893]: I0128 15:30:41.267815 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 28 15:30:41 crc kubenswrapper[4893]: I0128 15:30:41.303351 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 28 15:30:41 crc kubenswrapper[4893]: I0128 15:30:41.378373 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 15:30:41 crc kubenswrapper[4893]: I0128 15:30:41.379168 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 15:30:41 crc kubenswrapper[4893]: I0128 15:30:41.383132 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 15:30:41 crc kubenswrapper[4893]: I0128 15:30:41.383211 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 15:30:43 crc kubenswrapper[4893]: I0128 15:30:43.038246 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 28 15:30:44 crc kubenswrapper[4893]: I0128 15:30:44.077134 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 15:30:44 crc kubenswrapper[4893]: I0128 15:30:44.078140 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 15:30:44 crc kubenswrapper[4893]: I0128 15:30:44.080651 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 15:30:44 crc kubenswrapper[4893]: I0128 15:30:44.084951 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 15:30:44 crc kubenswrapper[4893]: I0128 15:30:44.098592 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 15:30:44 crc kubenswrapper[4893]: I0128 15:30:44.099603 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 15:30:44 crc kubenswrapper[4893]: I0128 15:30:44.101107 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 15:30:44 crc kubenswrapper[4893]: I0128 15:30:44.104291 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 15:30:44 crc kubenswrapper[4893]: I0128 15:30:44.543088 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 15:30:44 crc kubenswrapper[4893]: I0128 15:30:44.543165 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 15:30:44 crc kubenswrapper[4893]: I0128 15:30:44.548332 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 15:30:44 crc kubenswrapper[4893]: I0128 15:30:44.549147 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 15:30:46 crc kubenswrapper[4893]: I0128 15:30:46.268051 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 28 15:30:46 crc kubenswrapper[4893]: I0128 15:30:46.303920 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 28 15:30:46 crc kubenswrapper[4893]: I0128 15:30:46.378765 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 15:30:46 crc kubenswrapper[4893]: I0128 15:30:46.378845 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 15:30:46 crc kubenswrapper[4893]: I0128 15:30:46.383187 4893 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 15:30:46 crc kubenswrapper[4893]: I0128 15:30:46.383373 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 15:30:46 crc kubenswrapper[4893]: I0128 15:30:46.464531 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 28 15:30:46 crc kubenswrapper[4893]: I0128 15:30:46.467164 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 28 15:30:46 crc kubenswrapper[4893]: I0128 15:30:46.590897 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 28 15:30:46 crc kubenswrapper[4893]: I0128 15:30:46.604245 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 28 15:30:47 crc kubenswrapper[4893]: I0128 15:30:47.545806 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="c2a91c2c-ade3-4dd7-983a-49eda7bf545b" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.174:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:30:47 crc kubenswrapper[4893]: I0128 15:30:47.545920 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="b747a8ff-f3ba-438c-84a5-2175ace99287" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.173:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:30:47 crc kubenswrapper[4893]: I0128 15:30:47.546002 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="c2a91c2c-ade3-4dd7-983a-49eda7bf545b" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.174:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:30:47 crc kubenswrapper[4893]: I0128 15:30:47.545952 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="b747a8ff-f3ba-438c-84a5-2175ace99287" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.173:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:30:48 crc kubenswrapper[4893]: I0128 15:30:48.008933 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 28 15:30:54 crc kubenswrapper[4893]: I0128 15:30:54.892712 4893 scope.go:117] "RemoveContainer" containerID="79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51" Jan 28 15:30:54 crc kubenswrapper[4893]: E0128 15:30:54.893915 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:30:56 crc kubenswrapper[4893]: I0128 15:30:56.381599 4893 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 15:30:56 crc kubenswrapper[4893]: I0128 15:30:56.384017 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 15:30:56 crc kubenswrapper[4893]: I0128 15:30:56.384567 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 15:30:56 crc kubenswrapper[4893]: I0128 15:30:56.387366 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 15:30:56 crc kubenswrapper[4893]: I0128 15:30:56.389510 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 15:30:56 crc kubenswrapper[4893]: I0128 15:30:56.389655 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 15:30:56 crc kubenswrapper[4893]: I0128 15:30:56.648569 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 15:30:56 crc kubenswrapper[4893]: I0128 15:30:56.648936 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 15:30:57 crc kubenswrapper[4893]: I0128 15:30:57.485271 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 28 15:30:57 crc kubenswrapper[4893]: I0128 15:30:57.485947 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-2" podUID="747647df-2218-4f3a-a1f4-132b662282ac" containerName="nova-kuttl-api-log" containerID="cri-o://1426fde4640919ecaf8ccf0dab3e2fb29e0780a44171888b713852ed766e1baf" gracePeriod=30 Jan 28 15:30:57 crc kubenswrapper[4893]: I0128 15:30:57.486045 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-2" podUID="747647df-2218-4f3a-a1f4-132b662282ac" containerName="nova-kuttl-api-api" containerID="cri-o://fb85e94d3f0b9e99f32ab547b316fe7b14954907a33b110aadebc972e7e16158" gracePeriod=30 Jan 28 15:30:57 crc kubenswrapper[4893]: I0128 15:30:57.498561 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 28 15:30:57 crc kubenswrapper[4893]: I0128 15:30:57.498929 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-1" podUID="7dab9ea9-5d9d-436d-bbfe-4dae18a6e224" containerName="nova-kuttl-api-log" containerID="cri-o://04b679bf0479c79c4fb9c19a7e81a8f1dc35a6e8ad6cf96a21e796fa413cb978" gracePeriod=30 Jan 28 15:30:57 crc kubenswrapper[4893]: I0128 15:30:57.499056 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-1" podUID="7dab9ea9-5d9d-436d-bbfe-4dae18a6e224" containerName="nova-kuttl-api-api" containerID="cri-o://08fd1ab72835c2c50055716eb6cea06b76157b3f5c4e049073f2c8d2c0ffaa84" gracePeriod=30 Jan 28 15:30:57 crc kubenswrapper[4893]: I0128 15:30:57.657620 4893 generic.go:334] "Generic (PLEG): container finished" podID="7dab9ea9-5d9d-436d-bbfe-4dae18a6e224" containerID="04b679bf0479c79c4fb9c19a7e81a8f1dc35a6e8ad6cf96a21e796fa413cb978" exitCode=143 Jan 28 15:30:57 crc kubenswrapper[4893]: I0128 15:30:57.657954 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"7dab9ea9-5d9d-436d-bbfe-4dae18a6e224","Type":"ContainerDied","Data":"04b679bf0479c79c4fb9c19a7e81a8f1dc35a6e8ad6cf96a21e796fa413cb978"} Jan 28 15:30:57 crc kubenswrapper[4893]: I0128 15:30:57.844447 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 28 15:30:57 crc kubenswrapper[4893]: I0128 15:30:57.844725 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" podUID="4802233c-9a5c-4074-b1ae-df434de29109" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://74d304226fd75a4afafe105c8fa169b4b6acdfdc99f465f2d566223b8af4b04b" gracePeriod=30 Jan 28 15:30:57 crc kubenswrapper[4893]: I0128 15:30:57.859531 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 28 15:30:57 crc kubenswrapper[4893]: I0128 15:30:57.859814 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" podUID="e0b48ebb-b212-4370-aee1-db0d64b7a446" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://d6c6f246137013566386010e74c0300a8f5af6ff1b6f813ceb0e2b555cb54934" gracePeriod=30 Jan 28 15:30:58 crc kubenswrapper[4893]: I0128 15:30:58.669452 4893 generic.go:334] "Generic (PLEG): container finished" podID="747647df-2218-4f3a-a1f4-132b662282ac" containerID="1426fde4640919ecaf8ccf0dab3e2fb29e0780a44171888b713852ed766e1baf" exitCode=143 Jan 28 15:30:58 crc kubenswrapper[4893]: I0128 15:30:58.670192 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"747647df-2218-4f3a-a1f4-132b662282ac","Type":"ContainerDied","Data":"1426fde4640919ecaf8ccf0dab3e2fb29e0780a44171888b713852ed766e1baf"} Jan 28 15:30:59 crc kubenswrapper[4893]: E0128 15:30:59.503743 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d6c6f246137013566386010e74c0300a8f5af6ff1b6f813ceb0e2b555cb54934" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 15:30:59 crc kubenswrapper[4893]: E0128 15:30:59.505641 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d6c6f246137013566386010e74c0300a8f5af6ff1b6f813ceb0e2b555cb54934" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 15:30:59 crc kubenswrapper[4893]: E0128 15:30:59.507236 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d6c6f246137013566386010e74c0300a8f5af6ff1b6f813ceb0e2b555cb54934" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 15:30:59 crc kubenswrapper[4893]: E0128 15:30:59.507271 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" podUID="e0b48ebb-b212-4370-aee1-db0d64b7a446" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 15:30:59 crc kubenswrapper[4893]: E0128 15:30:59.517689 4893 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="74d304226fd75a4afafe105c8fa169b4b6acdfdc99f465f2d566223b8af4b04b" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 15:30:59 crc kubenswrapper[4893]: E0128 15:30:59.518989 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="74d304226fd75a4afafe105c8fa169b4b6acdfdc99f465f2d566223b8af4b04b" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 15:30:59 crc kubenswrapper[4893]: E0128 15:30:59.520887 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="74d304226fd75a4afafe105c8fa169b4b6acdfdc99f465f2d566223b8af4b04b" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 15:30:59 crc kubenswrapper[4893]: E0128 15:30:59.520949 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" podUID="4802233c-9a5c-4074-b1ae-df434de29109" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.244843 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.360002 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kp62d\" (UniqueName: \"kubernetes.io/projected/747647df-2218-4f3a-a1f4-132b662282ac-kube-api-access-kp62d\") pod \"747647df-2218-4f3a-a1f4-132b662282ac\" (UID: \"747647df-2218-4f3a-a1f4-132b662282ac\") " Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.360345 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/747647df-2218-4f3a-a1f4-132b662282ac-config-data\") pod \"747647df-2218-4f3a-a1f4-132b662282ac\" (UID: \"747647df-2218-4f3a-a1f4-132b662282ac\") " Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.360596 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/747647df-2218-4f3a-a1f4-132b662282ac-logs\") pod \"747647df-2218-4f3a-a1f4-132b662282ac\" (UID: \"747647df-2218-4f3a-a1f4-132b662282ac\") " Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.361342 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/747647df-2218-4f3a-a1f4-132b662282ac-logs" (OuterVolumeSpecName: "logs") pod "747647df-2218-4f3a-a1f4-132b662282ac" (UID: "747647df-2218-4f3a-a1f4-132b662282ac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.365338 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/747647df-2218-4f3a-a1f4-132b662282ac-kube-api-access-kp62d" (OuterVolumeSpecName: "kube-api-access-kp62d") pod "747647df-2218-4f3a-a1f4-132b662282ac" (UID: "747647df-2218-4f3a-a1f4-132b662282ac"). 
InnerVolumeSpecName "kube-api-access-kp62d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.365396 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.399016 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/747647df-2218-4f3a-a1f4-132b662282ac-config-data" (OuterVolumeSpecName: "config-data") pod "747647df-2218-4f3a-a1f4-132b662282ac" (UID: "747647df-2218-4f3a-a1f4-132b662282ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.461760 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzfzr\" (UniqueName: \"kubernetes.io/projected/7dab9ea9-5d9d-436d-bbfe-4dae18a6e224-kube-api-access-wzfzr\") pod \"7dab9ea9-5d9d-436d-bbfe-4dae18a6e224\" (UID: \"7dab9ea9-5d9d-436d-bbfe-4dae18a6e224\") " Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.461823 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dab9ea9-5d9d-436d-bbfe-4dae18a6e224-config-data\") pod \"7dab9ea9-5d9d-436d-bbfe-4dae18a6e224\" (UID: \"7dab9ea9-5d9d-436d-bbfe-4dae18a6e224\") " Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.461992 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dab9ea9-5d9d-436d-bbfe-4dae18a6e224-logs\") pod \"7dab9ea9-5d9d-436d-bbfe-4dae18a6e224\" (UID: \"7dab9ea9-5d9d-436d-bbfe-4dae18a6e224\") " Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.462527 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/747647df-2218-4f3a-a1f4-132b662282ac-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.462550 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kp62d\" (UniqueName: \"kubernetes.io/projected/747647df-2218-4f3a-a1f4-132b662282ac-kube-api-access-kp62d\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.462562 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/747647df-2218-4f3a-a1f4-132b662282ac-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.462920 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dab9ea9-5d9d-436d-bbfe-4dae18a6e224-logs" (OuterVolumeSpecName: "logs") pod "7dab9ea9-5d9d-436d-bbfe-4dae18a6e224" (UID: "7dab9ea9-5d9d-436d-bbfe-4dae18a6e224"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.464792 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dab9ea9-5d9d-436d-bbfe-4dae18a6e224-kube-api-access-wzfzr" (OuterVolumeSpecName: "kube-api-access-wzfzr") pod "7dab9ea9-5d9d-436d-bbfe-4dae18a6e224" (UID: "7dab9ea9-5d9d-436d-bbfe-4dae18a6e224"). InnerVolumeSpecName "kube-api-access-wzfzr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.485124 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dab9ea9-5d9d-436d-bbfe-4dae18a6e224-config-data" (OuterVolumeSpecName: "config-data") pod "7dab9ea9-5d9d-436d-bbfe-4dae18a6e224" (UID: "7dab9ea9-5d9d-436d-bbfe-4dae18a6e224"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.563964 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzfzr\" (UniqueName: \"kubernetes.io/projected/7dab9ea9-5d9d-436d-bbfe-4dae18a6e224-kube-api-access-wzfzr\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.564023 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dab9ea9-5d9d-436d-bbfe-4dae18a6e224-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.564039 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dab9ea9-5d9d-436d-bbfe-4dae18a6e224-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.700226 4893 generic.go:334] "Generic (PLEG): container finished" podID="747647df-2218-4f3a-a1f4-132b662282ac" containerID="fb85e94d3f0b9e99f32ab547b316fe7b14954907a33b110aadebc972e7e16158" exitCode=0 Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.700272 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.700319 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"747647df-2218-4f3a-a1f4-132b662282ac","Type":"ContainerDied","Data":"fb85e94d3f0b9e99f32ab547b316fe7b14954907a33b110aadebc972e7e16158"} Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.700375 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"747647df-2218-4f3a-a1f4-132b662282ac","Type":"ContainerDied","Data":"9c2db18a294b2df4bbee053880f8fdf26ad3db38bdd12b9a5729e469b0f1e2f0"} Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.700393 4893 scope.go:117] "RemoveContainer" containerID="fb85e94d3f0b9e99f32ab547b316fe7b14954907a33b110aadebc972e7e16158" Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.703377 4893 generic.go:334] "Generic (PLEG): container finished" podID="7dab9ea9-5d9d-436d-bbfe-4dae18a6e224" containerID="08fd1ab72835c2c50055716eb6cea06b76157b3f5c4e049073f2c8d2c0ffaa84" exitCode=0 Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.703420 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"7dab9ea9-5d9d-436d-bbfe-4dae18a6e224","Type":"ContainerDied","Data":"08fd1ab72835c2c50055716eb6cea06b76157b3f5c4e049073f2c8d2c0ffaa84"} Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.703450 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"7dab9ea9-5d9d-436d-bbfe-4dae18a6e224","Type":"ContainerDied","Data":"1bea29233d0b8710c90b84d31692a1ddf71558908334349210428f386953073a"} Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.703520 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.741234 4893 scope.go:117] "RemoveContainer" containerID="1426fde4640919ecaf8ccf0dab3e2fb29e0780a44171888b713852ed766e1baf" Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.758411 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.768500 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.774695 4893 scope.go:117] "RemoveContainer" containerID="fb85e94d3f0b9e99f32ab547b316fe7b14954907a33b110aadebc972e7e16158" Jan 28 15:31:01 crc kubenswrapper[4893]: E0128 15:31:01.775136 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb85e94d3f0b9e99f32ab547b316fe7b14954907a33b110aadebc972e7e16158\": container with ID starting with fb85e94d3f0b9e99f32ab547b316fe7b14954907a33b110aadebc972e7e16158 not found: ID does not exist" containerID="fb85e94d3f0b9e99f32ab547b316fe7b14954907a33b110aadebc972e7e16158" Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.775183 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb85e94d3f0b9e99f32ab547b316fe7b14954907a33b110aadebc972e7e16158"} err="failed to get container status \"fb85e94d3f0b9e99f32ab547b316fe7b14954907a33b110aadebc972e7e16158\": rpc error: code = NotFound desc = could not find container \"fb85e94d3f0b9e99f32ab547b316fe7b14954907a33b110aadebc972e7e16158\": container with ID starting with fb85e94d3f0b9e99f32ab547b316fe7b14954907a33b110aadebc972e7e16158 not found: ID does not exist" Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.775208 4893 scope.go:117] "RemoveContainer" containerID="1426fde4640919ecaf8ccf0dab3e2fb29e0780a44171888b713852ed766e1baf" Jan 28 15:31:01 crc kubenswrapper[4893]: E0128 15:31:01.775721 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1426fde4640919ecaf8ccf0dab3e2fb29e0780a44171888b713852ed766e1baf\": container with ID starting with 1426fde4640919ecaf8ccf0dab3e2fb29e0780a44171888b713852ed766e1baf not found: ID does not exist" containerID="1426fde4640919ecaf8ccf0dab3e2fb29e0780a44171888b713852ed766e1baf" Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.775783 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1426fde4640919ecaf8ccf0dab3e2fb29e0780a44171888b713852ed766e1baf"} err="failed to get container status \"1426fde4640919ecaf8ccf0dab3e2fb29e0780a44171888b713852ed766e1baf\": rpc error: code = NotFound desc = could not find container \"1426fde4640919ecaf8ccf0dab3e2fb29e0780a44171888b713852ed766e1baf\": container with ID starting with 1426fde4640919ecaf8ccf0dab3e2fb29e0780a44171888b713852ed766e1baf not found: ID does not exist" Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.775818 4893 scope.go:117] "RemoveContainer" containerID="08fd1ab72835c2c50055716eb6cea06b76157b3f5c4e049073f2c8d2c0ffaa84" Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.777176 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.787524 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 28 
15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.798919 4893 scope.go:117] "RemoveContainer" containerID="04b679bf0479c79c4fb9c19a7e81a8f1dc35a6e8ad6cf96a21e796fa413cb978" Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.859387 4893 scope.go:117] "RemoveContainer" containerID="08fd1ab72835c2c50055716eb6cea06b76157b3f5c4e049073f2c8d2c0ffaa84" Jan 28 15:31:01 crc kubenswrapper[4893]: E0128 15:31:01.859903 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08fd1ab72835c2c50055716eb6cea06b76157b3f5c4e049073f2c8d2c0ffaa84\": container with ID starting with 08fd1ab72835c2c50055716eb6cea06b76157b3f5c4e049073f2c8d2c0ffaa84 not found: ID does not exist" containerID="08fd1ab72835c2c50055716eb6cea06b76157b3f5c4e049073f2c8d2c0ffaa84" Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.859956 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08fd1ab72835c2c50055716eb6cea06b76157b3f5c4e049073f2c8d2c0ffaa84"} err="failed to get container status \"08fd1ab72835c2c50055716eb6cea06b76157b3f5c4e049073f2c8d2c0ffaa84\": rpc error: code = NotFound desc = could not find container \"08fd1ab72835c2c50055716eb6cea06b76157b3f5c4e049073f2c8d2c0ffaa84\": container with ID starting with 08fd1ab72835c2c50055716eb6cea06b76157b3f5c4e049073f2c8d2c0ffaa84 not found: ID does not exist" Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.859995 4893 scope.go:117] "RemoveContainer" containerID="04b679bf0479c79c4fb9c19a7e81a8f1dc35a6e8ad6cf96a21e796fa413cb978" Jan 28 15:31:01 crc kubenswrapper[4893]: E0128 15:31:01.860675 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04b679bf0479c79c4fb9c19a7e81a8f1dc35a6e8ad6cf96a21e796fa413cb978\": container with ID starting with 04b679bf0479c79c4fb9c19a7e81a8f1dc35a6e8ad6cf96a21e796fa413cb978 not found: ID does not exist" containerID="04b679bf0479c79c4fb9c19a7e81a8f1dc35a6e8ad6cf96a21e796fa413cb978" Jan 28 15:31:01 crc kubenswrapper[4893]: I0128 15:31:01.860734 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04b679bf0479c79c4fb9c19a7e81a8f1dc35a6e8ad6cf96a21e796fa413cb978"} err="failed to get container status \"04b679bf0479c79c4fb9c19a7e81a8f1dc35a6e8ad6cf96a21e796fa413cb978\": rpc error: code = NotFound desc = could not find container \"04b679bf0479c79c4fb9c19a7e81a8f1dc35a6e8ad6cf96a21e796fa413cb978\": container with ID starting with 04b679bf0479c79c4fb9c19a7e81a8f1dc35a6e8ad6cf96a21e796fa413cb978 not found: ID does not exist" Jan 28 15:31:02 crc kubenswrapper[4893]: I0128 15:31:02.909524 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="747647df-2218-4f3a-a1f4-132b662282ac" path="/var/lib/kubelet/pods/747647df-2218-4f3a-a1f4-132b662282ac/volumes" Jan 28 15:31:02 crc kubenswrapper[4893]: I0128 15:31:02.912102 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dab9ea9-5d9d-436d-bbfe-4dae18a6e224" path="/var/lib/kubelet/pods/7dab9ea9-5d9d-436d-bbfe-4dae18a6e224/volumes" Jan 28 15:31:03 crc kubenswrapper[4893]: I0128 15:31:03.783220 4893 generic.go:334] "Generic (PLEG): container finished" podID="4802233c-9a5c-4074-b1ae-df434de29109" containerID="74d304226fd75a4afafe105c8fa169b4b6acdfdc99f465f2d566223b8af4b04b" exitCode=0 Jan 28 15:31:03 crc kubenswrapper[4893]: I0128 15:31:03.783272 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" event={"ID":"4802233c-9a5c-4074-b1ae-df434de29109","Type":"ContainerDied","Data":"74d304226fd75a4afafe105c8fa169b4b6acdfdc99f465f2d566223b8af4b04b"} Jan 28 15:31:04 crc kubenswrapper[4893]: I0128 15:31:04.046029 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 28 15:31:04 crc kubenswrapper[4893]: I0128 15:31:04.123568 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 28 15:31:04 crc kubenswrapper[4893]: I0128 15:31:04.125510 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k45hk\" (UniqueName: \"kubernetes.io/projected/4802233c-9a5c-4074-b1ae-df434de29109-kube-api-access-k45hk\") pod \"4802233c-9a5c-4074-b1ae-df434de29109\" (UID: \"4802233c-9a5c-4074-b1ae-df434de29109\") " Jan 28 15:31:04 crc kubenswrapper[4893]: I0128 15:31:04.125636 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4802233c-9a5c-4074-b1ae-df434de29109-config-data\") pod \"4802233c-9a5c-4074-b1ae-df434de29109\" (UID: \"4802233c-9a5c-4074-b1ae-df434de29109\") " Jan 28 15:31:04 crc kubenswrapper[4893]: I0128 15:31:04.137835 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4802233c-9a5c-4074-b1ae-df434de29109-kube-api-access-k45hk" (OuterVolumeSpecName: "kube-api-access-k45hk") pod "4802233c-9a5c-4074-b1ae-df434de29109" (UID: "4802233c-9a5c-4074-b1ae-df434de29109"). InnerVolumeSpecName "kube-api-access-k45hk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:04 crc kubenswrapper[4893]: I0128 15:31:04.157776 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4802233c-9a5c-4074-b1ae-df434de29109-config-data" (OuterVolumeSpecName: "config-data") pod "4802233c-9a5c-4074-b1ae-df434de29109" (UID: "4802233c-9a5c-4074-b1ae-df434de29109"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:31:04 crc kubenswrapper[4893]: I0128 15:31:04.227614 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0b48ebb-b212-4370-aee1-db0d64b7a446-config-data\") pod \"e0b48ebb-b212-4370-aee1-db0d64b7a446\" (UID: \"e0b48ebb-b212-4370-aee1-db0d64b7a446\") " Jan 28 15:31:04 crc kubenswrapper[4893]: I0128 15:31:04.227856 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trld7\" (UniqueName: \"kubernetes.io/projected/e0b48ebb-b212-4370-aee1-db0d64b7a446-kube-api-access-trld7\") pod \"e0b48ebb-b212-4370-aee1-db0d64b7a446\" (UID: \"e0b48ebb-b212-4370-aee1-db0d64b7a446\") " Jan 28 15:31:04 crc kubenswrapper[4893]: I0128 15:31:04.228221 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4802233c-9a5c-4074-b1ae-df434de29109-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:04 crc kubenswrapper[4893]: I0128 15:31:04.228241 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k45hk\" (UniqueName: \"kubernetes.io/projected/4802233c-9a5c-4074-b1ae-df434de29109-kube-api-access-k45hk\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:04 crc kubenswrapper[4893]: I0128 15:31:04.245670 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0b48ebb-b212-4370-aee1-db0d64b7a446-kube-api-access-trld7" (OuterVolumeSpecName: "kube-api-access-trld7") pod "e0b48ebb-b212-4370-aee1-db0d64b7a446" (UID: "e0b48ebb-b212-4370-aee1-db0d64b7a446"). InnerVolumeSpecName "kube-api-access-trld7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:04 crc kubenswrapper[4893]: I0128 15:31:04.251501 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0b48ebb-b212-4370-aee1-db0d64b7a446-config-data" (OuterVolumeSpecName: "config-data") pod "e0b48ebb-b212-4370-aee1-db0d64b7a446" (UID: "e0b48ebb-b212-4370-aee1-db0d64b7a446"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:31:04 crc kubenswrapper[4893]: I0128 15:31:04.330259 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trld7\" (UniqueName: \"kubernetes.io/projected/e0b48ebb-b212-4370-aee1-db0d64b7a446-kube-api-access-trld7\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:04 crc kubenswrapper[4893]: I0128 15:31:04.330303 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0b48ebb-b212-4370-aee1-db0d64b7a446-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:04 crc kubenswrapper[4893]: I0128 15:31:04.799460 4893 generic.go:334] "Generic (PLEG): container finished" podID="e0b48ebb-b212-4370-aee1-db0d64b7a446" containerID="d6c6f246137013566386010e74c0300a8f5af6ff1b6f813ceb0e2b555cb54934" exitCode=0 Jan 28 15:31:04 crc kubenswrapper[4893]: I0128 15:31:04.799565 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" event={"ID":"e0b48ebb-b212-4370-aee1-db0d64b7a446","Type":"ContainerDied","Data":"d6c6f246137013566386010e74c0300a8f5af6ff1b6f813ceb0e2b555cb54934"} Jan 28 15:31:04 crc kubenswrapper[4893]: I0128 15:31:04.799599 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" event={"ID":"e0b48ebb-b212-4370-aee1-db0d64b7a446","Type":"ContainerDied","Data":"3477a0ae4b85374e8799c93282859e7d93db41810da20c4e831dac870ea39e8e"} Jan 28 15:31:04 crc kubenswrapper[4893]: I0128 15:31:04.799619 4893 scope.go:117] "RemoveContainer" containerID="d6c6f246137013566386010e74c0300a8f5af6ff1b6f813ceb0e2b555cb54934" Jan 28 15:31:04 crc kubenswrapper[4893]: I0128 15:31:04.799739 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 28 15:31:04 crc kubenswrapper[4893]: I0128 15:31:04.829306 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" event={"ID":"4802233c-9a5c-4074-b1ae-df434de29109","Type":"ContainerDied","Data":"aa982390bba057ac9ef2eeb432a774c43afc693db5b8fe9d613cfdf16e0210a2"} Jan 28 15:31:04 crc kubenswrapper[4893]: I0128 15:31:04.829356 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 28 15:31:04 crc kubenswrapper[4893]: I0128 15:31:04.855585 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 28 15:31:04 crc kubenswrapper[4893]: I0128 15:31:04.871056 4893 scope.go:117] "RemoveContainer" containerID="d6c6f246137013566386010e74c0300a8f5af6ff1b6f813ceb0e2b555cb54934" Jan 28 15:31:04 crc kubenswrapper[4893]: E0128 15:31:04.876757 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6c6f246137013566386010e74c0300a8f5af6ff1b6f813ceb0e2b555cb54934\": container with ID starting with d6c6f246137013566386010e74c0300a8f5af6ff1b6f813ceb0e2b555cb54934 not found: ID does not exist" containerID="d6c6f246137013566386010e74c0300a8f5af6ff1b6f813ceb0e2b555cb54934" Jan 28 15:31:04 crc kubenswrapper[4893]: I0128 15:31:04.876823 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6c6f246137013566386010e74c0300a8f5af6ff1b6f813ceb0e2b555cb54934"} err="failed to get container status \"d6c6f246137013566386010e74c0300a8f5af6ff1b6f813ceb0e2b555cb54934\": rpc error: code = NotFound desc = could not find container \"d6c6f246137013566386010e74c0300a8f5af6ff1b6f813ceb0e2b555cb54934\": container with ID starting with d6c6f246137013566386010e74c0300a8f5af6ff1b6f813ceb0e2b555cb54934 not found: ID does not exist" Jan 28 15:31:04 crc kubenswrapper[4893]: I0128 15:31:04.876859 4893 scope.go:117] "RemoveContainer" containerID="74d304226fd75a4afafe105c8fa169b4b6acdfdc99f465f2d566223b8af4b04b" Jan 28 15:31:04 crc kubenswrapper[4893]: I0128 15:31:04.880928 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 28 15:31:05 crc kubenswrapper[4893]: I0128 15:31:04.991103 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0b48ebb-b212-4370-aee1-db0d64b7a446" path="/var/lib/kubelet/pods/e0b48ebb-b212-4370-aee1-db0d64b7a446/volumes" Jan 28 15:31:05 crc kubenswrapper[4893]: I0128 15:31:05.002060 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 28 15:31:05 crc kubenswrapper[4893]: I0128 15:31:05.033230 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 28 15:31:05 crc kubenswrapper[4893]: I0128 15:31:05.136318 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"] Jan 28 15:31:05 crc kubenswrapper[4893]: I0128 15:31:05.136580 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-2" podUID="d90cbda0-f31c-4d68-8a5d-288d1e651c62" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://fd88d5dead44e2ebc343335c13b982350c8e98a5e3a16155ce6e1e8014da35f4" gracePeriod=30 Jan 28 15:31:05 crc kubenswrapper[4893]: I0128 15:31:05.164564 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"] Jan 28 15:31:05 crc kubenswrapper[4893]: I0128 15:31:05.164834 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-1" podUID="8bd2f429-3780-4f91-8eeb-fab736e0ff82" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://1e99e30acefad6953c93639bc48612e302db94a8c25087681cad617a14f06d9d" gracePeriod=30 Jan 28 15:31:05 crc 
kubenswrapper[4893]: I0128 15:31:05.216938 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"] Jan 28 15:31:05 crc kubenswrapper[4893]: I0128 15:31:05.217614 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="b747a8ff-f3ba-438c-84a5-2175ace99287" containerName="nova-kuttl-metadata-log" containerID="cri-o://9deb24d2a03765eb443d2d00b347f94bedb035551d2cd92297fd642e804f79a2" gracePeriod=30 Jan 28 15:31:05 crc kubenswrapper[4893]: I0128 15:31:05.217725 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="b747a8ff-f3ba-438c-84a5-2175ace99287" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://abae1f18a2baaf05ac990b2a23424bc8e1f5ffe2c192a429562cafb9405b5e86" gracePeriod=30 Jan 28 15:31:05 crc kubenswrapper[4893]: I0128 15:31:05.235203 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"] Jan 28 15:31:05 crc kubenswrapper[4893]: I0128 15:31:05.235495 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="c2a91c2c-ade3-4dd7-983a-49eda7bf545b" containerName="nova-kuttl-metadata-log" containerID="cri-o://4e56d304cd1b510fe02ccca7c135cc7019afafa6cb02dd0f61b3c6406cf1320d" gracePeriod=30 Jan 28 15:31:05 crc kubenswrapper[4893]: I0128 15:31:05.235603 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="c2a91c2c-ade3-4dd7-983a-49eda7bf545b" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://a624bf19f62397cc4382015857d1007ed36b83c5e86bb8f1e88e57e8a4a7396b" gracePeriod=30 Jan 28 15:31:05 crc kubenswrapper[4893]: I0128 15:31:05.665906 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"] Jan 28 15:31:05 crc kubenswrapper[4893]: I0128 15:31:05.666597 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" podUID="e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://5b63da1cde1ef3fdafcac07f15ba29288fe9f299f96d3a3211aaf65902f07096" gracePeriod=30 Jan 28 15:31:05 crc kubenswrapper[4893]: I0128 15:31:05.680838 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"] Jan 28 15:31:05 crc kubenswrapper[4893]: I0128 15:31:05.681490 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" podUID="03ddcf88-0be9-47cb-9301-0618b47d28f6" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://3ee81afc9404f5f04746ad73ee4dfadd3720e2ef36b461b9bb10e7922bc358ed" gracePeriod=30 Jan 28 15:31:05 crc kubenswrapper[4893]: I0128 15:31:05.841879 4893 generic.go:334] "Generic (PLEG): container finished" podID="b747a8ff-f3ba-438c-84a5-2175ace99287" containerID="9deb24d2a03765eb443d2d00b347f94bedb035551d2cd92297fd642e804f79a2" exitCode=143 Jan 28 15:31:05 crc kubenswrapper[4893]: I0128 15:31:05.841957 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"b747a8ff-f3ba-438c-84a5-2175ace99287","Type":"ContainerDied","Data":"9deb24d2a03765eb443d2d00b347f94bedb035551d2cd92297fd642e804f79a2"} Jan 28 15:31:05 crc 
kubenswrapper[4893]: I0128 15:31:05.844129 4893 generic.go:334] "Generic (PLEG): container finished" podID="c2a91c2c-ade3-4dd7-983a-49eda7bf545b" containerID="4e56d304cd1b510fe02ccca7c135cc7019afafa6cb02dd0f61b3c6406cf1320d" exitCode=143 Jan 28 15:31:05 crc kubenswrapper[4893]: I0128 15:31:05.844190 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"c2a91c2c-ade3-4dd7-983a-49eda7bf545b","Type":"ContainerDied","Data":"4e56d304cd1b510fe02ccca7c135cc7019afafa6cb02dd0f61b3c6406cf1320d"} Jan 28 15:31:06 crc kubenswrapper[4893]: E0128 15:31:06.269289 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fd88d5dead44e2ebc343335c13b982350c8e98a5e3a16155ce6e1e8014da35f4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 15:31:06 crc kubenswrapper[4893]: E0128 15:31:06.276623 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fd88d5dead44e2ebc343335c13b982350c8e98a5e3a16155ce6e1e8014da35f4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 15:31:06 crc kubenswrapper[4893]: E0128 15:31:06.278222 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fd88d5dead44e2ebc343335c13b982350c8e98a5e3a16155ce6e1e8014da35f4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 15:31:06 crc kubenswrapper[4893]: E0128 15:31:06.278295 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-2" podUID="d90cbda0-f31c-4d68-8a5d-288d1e651c62" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:31:06 crc kubenswrapper[4893]: E0128 15:31:06.304440 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1e99e30acefad6953c93639bc48612e302db94a8c25087681cad617a14f06d9d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 15:31:06 crc kubenswrapper[4893]: E0128 15:31:06.306932 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1e99e30acefad6953c93639bc48612e302db94a8c25087681cad617a14f06d9d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 15:31:06 crc kubenswrapper[4893]: E0128 15:31:06.308350 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1e99e30acefad6953c93639bc48612e302db94a8c25087681cad617a14f06d9d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 15:31:06 crc kubenswrapper[4893]: E0128 15:31:06.308550 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-1" podUID="8bd2f429-3780-4f91-8eeb-fab736e0ff82" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:31:06 crc kubenswrapper[4893]: I0128 15:31:06.904294 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4802233c-9a5c-4074-b1ae-df434de29109" path="/var/lib/kubelet/pods/4802233c-9a5c-4074-b1ae-df434de29109/volumes" Jan 28 15:31:07 crc kubenswrapper[4893]: I0128 15:31:07.891907 4893 scope.go:117] "RemoveContainer" containerID="79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51" Jan 28 15:31:07 crc kubenswrapper[4893]: E0128 15:31:07.892459 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:31:07 crc kubenswrapper[4893]: E0128 15:31:07.977525 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5b63da1cde1ef3fdafcac07f15ba29288fe9f299f96d3a3211aaf65902f07096" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 15:31:07 crc kubenswrapper[4893]: E0128 15:31:07.982342 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5b63da1cde1ef3fdafcac07f15ba29288fe9f299f96d3a3211aaf65902f07096" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 15:31:07 crc kubenswrapper[4893]: E0128 15:31:07.983789 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5b63da1cde1ef3fdafcac07f15ba29288fe9f299f96d3a3211aaf65902f07096" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 15:31:07 crc kubenswrapper[4893]: E0128 15:31:07.983861 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" podUID="e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 15:31:08 crc kubenswrapper[4893]: E0128 15:31:08.006872 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ee81afc9404f5f04746ad73ee4dfadd3720e2ef36b461b9bb10e7922bc358ed" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 15:31:08 crc kubenswrapper[4893]: E0128 15:31:08.008698 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ee81afc9404f5f04746ad73ee4dfadd3720e2ef36b461b9bb10e7922bc358ed" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 15:31:08 crc kubenswrapper[4893]: E0128 15:31:08.010057 4893 log.go:32] "ExecSync cmd 
from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3ee81afc9404f5f04746ad73ee4dfadd3720e2ef36b461b9bb10e7922bc358ed" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 15:31:08 crc kubenswrapper[4893]: E0128 15:31:08.010110 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" podUID="03ddcf88-0be9-47cb-9301-0618b47d28f6" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 15:31:08 crc kubenswrapper[4893]: I0128 15:31:08.389765 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="b747a8ff-f3ba-438c-84a5-2175ace99287" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.173:8775/\": read tcp 10.217.0.2:58236->10.217.0.173:8775: read: connection reset by peer" Jan 28 15:31:08 crc kubenswrapper[4893]: I0128 15:31:08.390446 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="b747a8ff-f3ba-438c-84a5-2175ace99287" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.173:8775/\": read tcp 10.217.0.2:58250->10.217.0.173:8775: read: connection reset by peer" Jan 28 15:31:08 crc kubenswrapper[4893]: I0128 15:31:08.395221 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="c2a91c2c-ade3-4dd7-983a-49eda7bf545b" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.174:8775/\": read tcp 10.217.0.2:36450->10.217.0.174:8775: read: connection reset by peer" Jan 28 15:31:08 crc kubenswrapper[4893]: I0128 15:31:08.395218 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="c2a91c2c-ade3-4dd7-983a-49eda7bf545b" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.174:8775/\": read tcp 10.217.0.2:36452->10.217.0.174:8775: read: connection reset by peer" Jan 28 15:31:08 crc kubenswrapper[4893]: I0128 15:31:08.877797 4893 generic.go:334] "Generic (PLEG): container finished" podID="b747a8ff-f3ba-438c-84a5-2175ace99287" containerID="abae1f18a2baaf05ac990b2a23424bc8e1f5ffe2c192a429562cafb9405b5e86" exitCode=0 Jan 28 15:31:08 crc kubenswrapper[4893]: I0128 15:31:08.877885 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"b747a8ff-f3ba-438c-84a5-2175ace99287","Type":"ContainerDied","Data":"abae1f18a2baaf05ac990b2a23424bc8e1f5ffe2c192a429562cafb9405b5e86"} Jan 28 15:31:08 crc kubenswrapper[4893]: I0128 15:31:08.878307 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"b747a8ff-f3ba-438c-84a5-2175ace99287","Type":"ContainerDied","Data":"2a883dbbfd799785d65c97e25284f93b7b3e41c4de53a14f38bd854bedae3dc3"} Jan 28 15:31:08 crc kubenswrapper[4893]: I0128 15:31:08.878333 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a883dbbfd799785d65c97e25284f93b7b3e41c4de53a14f38bd854bedae3dc3" Jan 28 15:31:08 crc kubenswrapper[4893]: I0128 15:31:08.880298 4893 generic.go:334] "Generic (PLEG): container finished" 
podID="c2a91c2c-ade3-4dd7-983a-49eda7bf545b" containerID="a624bf19f62397cc4382015857d1007ed36b83c5e86bb8f1e88e57e8a4a7396b" exitCode=0 Jan 28 15:31:08 crc kubenswrapper[4893]: I0128 15:31:08.880339 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"c2a91c2c-ade3-4dd7-983a-49eda7bf545b","Type":"ContainerDied","Data":"a624bf19f62397cc4382015857d1007ed36b83c5e86bb8f1e88e57e8a4a7396b"} Jan 28 15:31:08 crc kubenswrapper[4893]: I0128 15:31:08.880365 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"c2a91c2c-ade3-4dd7-983a-49eda7bf545b","Type":"ContainerDied","Data":"8aff4536873b55938df332c7ff24d99fbf8ab71dbc21d0c6acce3716349e77ab"} Jan 28 15:31:08 crc kubenswrapper[4893]: I0128 15:31:08.880379 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8aff4536873b55938df332c7ff24d99fbf8ab71dbc21d0c6acce3716349e77ab" Jan 28 15:31:08 crc kubenswrapper[4893]: I0128 15:31:08.914024 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 15:31:08 crc kubenswrapper[4893]: I0128 15:31:08.923413 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.059627 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b747a8ff-f3ba-438c-84a5-2175ace99287-config-data\") pod \"b747a8ff-f3ba-438c-84a5-2175ace99287\" (UID: \"b747a8ff-f3ba-438c-84a5-2175ace99287\") " Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.059723 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sx2rp\" (UniqueName: \"kubernetes.io/projected/c2a91c2c-ade3-4dd7-983a-49eda7bf545b-kube-api-access-sx2rp\") pod \"c2a91c2c-ade3-4dd7-983a-49eda7bf545b\" (UID: \"c2a91c2c-ade3-4dd7-983a-49eda7bf545b\") " Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.059800 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg5rr\" (UniqueName: \"kubernetes.io/projected/b747a8ff-f3ba-438c-84a5-2175ace99287-kube-api-access-tg5rr\") pod \"b747a8ff-f3ba-438c-84a5-2175ace99287\" (UID: \"b747a8ff-f3ba-438c-84a5-2175ace99287\") " Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.059839 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2a91c2c-ade3-4dd7-983a-49eda7bf545b-config-data\") pod \"c2a91c2c-ade3-4dd7-983a-49eda7bf545b\" (UID: \"c2a91c2c-ade3-4dd7-983a-49eda7bf545b\") " Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.059870 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2a91c2c-ade3-4dd7-983a-49eda7bf545b-logs\") pod \"c2a91c2c-ade3-4dd7-983a-49eda7bf545b\" (UID: \"c2a91c2c-ade3-4dd7-983a-49eda7bf545b\") " Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.059925 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b747a8ff-f3ba-438c-84a5-2175ace99287-logs\") pod \"b747a8ff-f3ba-438c-84a5-2175ace99287\" (UID: \"b747a8ff-f3ba-438c-84a5-2175ace99287\") " Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.060833 4893 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b747a8ff-f3ba-438c-84a5-2175ace99287-logs" (OuterVolumeSpecName: "logs") pod "b747a8ff-f3ba-438c-84a5-2175ace99287" (UID: "b747a8ff-f3ba-438c-84a5-2175ace99287"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.061037 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2a91c2c-ade3-4dd7-983a-49eda7bf545b-logs" (OuterVolumeSpecName: "logs") pod "c2a91c2c-ade3-4dd7-983a-49eda7bf545b" (UID: "c2a91c2c-ade3-4dd7-983a-49eda7bf545b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.066825 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b747a8ff-f3ba-438c-84a5-2175ace99287-kube-api-access-tg5rr" (OuterVolumeSpecName: "kube-api-access-tg5rr") pod "b747a8ff-f3ba-438c-84a5-2175ace99287" (UID: "b747a8ff-f3ba-438c-84a5-2175ace99287"). InnerVolumeSpecName "kube-api-access-tg5rr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.068621 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2a91c2c-ade3-4dd7-983a-49eda7bf545b-kube-api-access-sx2rp" (OuterVolumeSpecName: "kube-api-access-sx2rp") pod "c2a91c2c-ade3-4dd7-983a-49eda7bf545b" (UID: "c2a91c2c-ade3-4dd7-983a-49eda7bf545b"). InnerVolumeSpecName "kube-api-access-sx2rp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.098763 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b747a8ff-f3ba-438c-84a5-2175ace99287-config-data" (OuterVolumeSpecName: "config-data") pod "b747a8ff-f3ba-438c-84a5-2175ace99287" (UID: "b747a8ff-f3ba-438c-84a5-2175ace99287"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.099757 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2a91c2c-ade3-4dd7-983a-49eda7bf545b-config-data" (OuterVolumeSpecName: "config-data") pod "c2a91c2c-ade3-4dd7-983a-49eda7bf545b" (UID: "c2a91c2c-ade3-4dd7-983a-49eda7bf545b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.163052 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b747a8ff-f3ba-438c-84a5-2175ace99287-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.163110 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b747a8ff-f3ba-438c-84a5-2175ace99287-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.163126 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sx2rp\" (UniqueName: \"kubernetes.io/projected/c2a91c2c-ade3-4dd7-983a-49eda7bf545b-kube-api-access-sx2rp\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.163145 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg5rr\" (UniqueName: \"kubernetes.io/projected/b747a8ff-f3ba-438c-84a5-2175ace99287-kube-api-access-tg5rr\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.163158 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2a91c2c-ade3-4dd7-983a-49eda7bf545b-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.163170 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2a91c2c-ade3-4dd7-983a-49eda7bf545b-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.550889 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.677671 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnnp9\" (UniqueName: \"kubernetes.io/projected/d90cbda0-f31c-4d68-8a5d-288d1e651c62-kube-api-access-tnnp9\") pod \"d90cbda0-f31c-4d68-8a5d-288d1e651c62\" (UID: \"d90cbda0-f31c-4d68-8a5d-288d1e651c62\") " Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.677917 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d90cbda0-f31c-4d68-8a5d-288d1e651c62-config-data\") pod \"d90cbda0-f31c-4d68-8a5d-288d1e651c62\" (UID: \"d90cbda0-f31c-4d68-8a5d-288d1e651c62\") " Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.693076 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d90cbda0-f31c-4d68-8a5d-288d1e651c62-kube-api-access-tnnp9" (OuterVolumeSpecName: "kube-api-access-tnnp9") pod "d90cbda0-f31c-4d68-8a5d-288d1e651c62" (UID: "d90cbda0-f31c-4d68-8a5d-288d1e651c62"). InnerVolumeSpecName "kube-api-access-tnnp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.717210 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d90cbda0-f31c-4d68-8a5d-288d1e651c62-config-data" (OuterVolumeSpecName: "config-data") pod "d90cbda0-f31c-4d68-8a5d-288d1e651c62" (UID: "d90cbda0-f31c-4d68-8a5d-288d1e651c62"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.781103 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnnp9\" (UniqueName: \"kubernetes.io/projected/d90cbda0-f31c-4d68-8a5d-288d1e651c62-kube-api-access-tnnp9\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.781157 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d90cbda0-f31c-4d68-8a5d-288d1e651c62-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.859565 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.902158 4893 generic.go:334] "Generic (PLEG): container finished" podID="8bd2f429-3780-4f91-8eeb-fab736e0ff82" containerID="1e99e30acefad6953c93639bc48612e302db94a8c25087681cad617a14f06d9d" exitCode=0 Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.902265 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-1" event={"ID":"8bd2f429-3780-4f91-8eeb-fab736e0ff82","Type":"ContainerDied","Data":"1e99e30acefad6953c93639bc48612e302db94a8c25087681cad617a14f06d9d"} Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.902293 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.902318 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-1" event={"ID":"8bd2f429-3780-4f91-8eeb-fab736e0ff82","Type":"ContainerDied","Data":"3850966a8f8588555cd16a2710ab50e90380400ef7183c6e5f3973a1695c0833"} Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.902342 4893 scope.go:117] "RemoveContainer" containerID="1e99e30acefad6953c93639bc48612e302db94a8c25087681cad617a14f06d9d" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.905806 4893 generic.go:334] "Generic (PLEG): container finished" podID="d90cbda0-f31c-4d68-8a5d-288d1e651c62" containerID="fd88d5dead44e2ebc343335c13b982350c8e98a5e3a16155ce6e1e8014da35f4" exitCode=0 Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.905896 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.905925 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-2" event={"ID":"d90cbda0-f31c-4d68-8a5d-288d1e651c62","Type":"ContainerDied","Data":"fd88d5dead44e2ebc343335c13b982350c8e98a5e3a16155ce6e1e8014da35f4"} Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.905959 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.905975 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-2" event={"ID":"d90cbda0-f31c-4d68-8a5d-288d1e651c62","Type":"ContainerDied","Data":"a662ee0e42db5acb617ca42d9ff0b260d664bc32ae9f3b1c4d39a94103ca1259"} Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.905905 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.950511 4893 scope.go:117] "RemoveContainer" containerID="1e99e30acefad6953c93639bc48612e302db94a8c25087681cad617a14f06d9d" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.950760 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"] Jan 28 15:31:09 crc kubenswrapper[4893]: E0128 15:31:09.951271 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e99e30acefad6953c93639bc48612e302db94a8c25087681cad617a14f06d9d\": container with ID starting with 1e99e30acefad6953c93639bc48612e302db94a8c25087681cad617a14f06d9d not found: ID does not exist" containerID="1e99e30acefad6953c93639bc48612e302db94a8c25087681cad617a14f06d9d" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.951323 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e99e30acefad6953c93639bc48612e302db94a8c25087681cad617a14f06d9d"} err="failed to get container status \"1e99e30acefad6953c93639bc48612e302db94a8c25087681cad617a14f06d9d\": rpc error: code = NotFound desc = could not find container \"1e99e30acefad6953c93639bc48612e302db94a8c25087681cad617a14f06d9d\": container with ID starting with 1e99e30acefad6953c93639bc48612e302db94a8c25087681cad617a14f06d9d not found: ID does not exist" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.951360 4893 scope.go:117] "RemoveContainer" containerID="fd88d5dead44e2ebc343335c13b982350c8e98a5e3a16155ce6e1e8014da35f4" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.974679 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"] Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.983313 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"] Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.990644 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v984h\" (UniqueName: \"kubernetes.io/projected/8bd2f429-3780-4f91-8eeb-fab736e0ff82-kube-api-access-v984h\") pod \"8bd2f429-3780-4f91-8eeb-fab736e0ff82\" (UID: \"8bd2f429-3780-4f91-8eeb-fab736e0ff82\") " Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.990714 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bd2f429-3780-4f91-8eeb-fab736e0ff82-config-data\") pod \"8bd2f429-3780-4f91-8eeb-fab736e0ff82\" (UID: \"8bd2f429-3780-4f91-8eeb-fab736e0ff82\") " Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.994928 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"] Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.998420 4893 scope.go:117] "RemoveContainer" containerID="fd88d5dead44e2ebc343335c13b982350c8e98a5e3a16155ce6e1e8014da35f4" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.998678 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bd2f429-3780-4f91-8eeb-fab736e0ff82-kube-api-access-v984h" (OuterVolumeSpecName: "kube-api-access-v984h") pod "8bd2f429-3780-4f91-8eeb-fab736e0ff82" (UID: "8bd2f429-3780-4f91-8eeb-fab736e0ff82"). InnerVolumeSpecName "kube-api-access-v984h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:09 crc kubenswrapper[4893]: E0128 15:31:09.999801 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd88d5dead44e2ebc343335c13b982350c8e98a5e3a16155ce6e1e8014da35f4\": container with ID starting with fd88d5dead44e2ebc343335c13b982350c8e98a5e3a16155ce6e1e8014da35f4 not found: ID does not exist" containerID="fd88d5dead44e2ebc343335c13b982350c8e98a5e3a16155ce6e1e8014da35f4" Jan 28 15:31:09 crc kubenswrapper[4893]: I0128 15:31:09.999870 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd88d5dead44e2ebc343335c13b982350c8e98a5e3a16155ce6e1e8014da35f4"} err="failed to get container status \"fd88d5dead44e2ebc343335c13b982350c8e98a5e3a16155ce6e1e8014da35f4\": rpc error: code = NotFound desc = could not find container \"fd88d5dead44e2ebc343335c13b982350c8e98a5e3a16155ce6e1e8014da35f4\": container with ID starting with fd88d5dead44e2ebc343335c13b982350c8e98a5e3a16155ce6e1e8014da35f4 not found: ID does not exist" Jan 28 15:31:10 crc kubenswrapper[4893]: I0128 15:31:10.008854 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"] Jan 28 15:31:10 crc kubenswrapper[4893]: I0128 15:31:10.015651 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"] Jan 28 15:31:10 crc kubenswrapper[4893]: I0128 15:31:10.017781 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bd2f429-3780-4f91-8eeb-fab736e0ff82-config-data" (OuterVolumeSpecName: "config-data") pod "8bd2f429-3780-4f91-8eeb-fab736e0ff82" (UID: "8bd2f429-3780-4f91-8eeb-fab736e0ff82"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:31:10 crc kubenswrapper[4893]: I0128 15:31:10.093221 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v984h\" (UniqueName: \"kubernetes.io/projected/8bd2f429-3780-4f91-8eeb-fab736e0ff82-kube-api-access-v984h\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:10 crc kubenswrapper[4893]: I0128 15:31:10.093254 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bd2f429-3780-4f91-8eeb-fab736e0ff82-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:10 crc kubenswrapper[4893]: I0128 15:31:10.232654 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"] Jan 28 15:31:10 crc kubenswrapper[4893]: I0128 15:31:10.248297 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"] Jan 28 15:31:10 crc kubenswrapper[4893]: I0128 15:31:10.905947 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bd2f429-3780-4f91-8eeb-fab736e0ff82" path="/var/lib/kubelet/pods/8bd2f429-3780-4f91-8eeb-fab736e0ff82/volumes" Jan 28 15:31:10 crc kubenswrapper[4893]: I0128 15:31:10.907109 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b747a8ff-f3ba-438c-84a5-2175ace99287" path="/var/lib/kubelet/pods/b747a8ff-f3ba-438c-84a5-2175ace99287/volumes" Jan 28 15:31:10 crc kubenswrapper[4893]: I0128 15:31:10.915863 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2a91c2c-ade3-4dd7-983a-49eda7bf545b" path="/var/lib/kubelet/pods/c2a91c2c-ade3-4dd7-983a-49eda7bf545b/volumes" Jan 28 15:31:10 crc kubenswrapper[4893]: I0128 15:31:10.916659 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d90cbda0-f31c-4d68-8a5d-288d1e651c62" path="/var/lib/kubelet/pods/d90cbda0-f31c-4d68-8a5d-288d1e651c62/volumes" Jan 28 15:31:10 crc kubenswrapper[4893]: I0128 15:31:10.918014 4893 generic.go:334] "Generic (PLEG): container finished" podID="e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6" containerID="5b63da1cde1ef3fdafcac07f15ba29288fe9f299f96d3a3211aaf65902f07096" exitCode=0 Jan 28 15:31:10 crc kubenswrapper[4893]: I0128 15:31:10.919803 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" event={"ID":"e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6","Type":"ContainerDied","Data":"5b63da1cde1ef3fdafcac07f15ba29288fe9f299f96d3a3211aaf65902f07096"} Jan 28 15:31:11 crc kubenswrapper[4893]: I0128 15:31:11.075495 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 28 15:31:11 crc kubenswrapper[4893]: I0128 15:31:11.181283 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 28 15:31:11 crc kubenswrapper[4893]: I0128 15:31:11.212150 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mm6vb\" (UniqueName: \"kubernetes.io/projected/e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6-kube-api-access-mm6vb\") pod \"e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6\" (UID: \"e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6\") " Jan 28 15:31:11 crc kubenswrapper[4893]: I0128 15:31:11.212235 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6-config-data\") pod \"e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6\" (UID: \"e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6\") " Jan 28 15:31:11 crc kubenswrapper[4893]: I0128 15:31:11.212344 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ddcf88-0be9-47cb-9301-0618b47d28f6-config-data\") pod \"03ddcf88-0be9-47cb-9301-0618b47d28f6\" (UID: \"03ddcf88-0be9-47cb-9301-0618b47d28f6\") " Jan 28 15:31:11 crc kubenswrapper[4893]: I0128 15:31:11.217893 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6-kube-api-access-mm6vb" (OuterVolumeSpecName: "kube-api-access-mm6vb") pod "e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6" (UID: "e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6"). InnerVolumeSpecName "kube-api-access-mm6vb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:11 crc kubenswrapper[4893]: I0128 15:31:11.236196 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6-config-data" (OuterVolumeSpecName: "config-data") pod "e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6" (UID: "e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:31:11 crc kubenswrapper[4893]: I0128 15:31:11.237273 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ddcf88-0be9-47cb-9301-0618b47d28f6-config-data" (OuterVolumeSpecName: "config-data") pod "03ddcf88-0be9-47cb-9301-0618b47d28f6" (UID: "03ddcf88-0be9-47cb-9301-0618b47d28f6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:31:11 crc kubenswrapper[4893]: I0128 15:31:11.313239 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g26nr\" (UniqueName: \"kubernetes.io/projected/03ddcf88-0be9-47cb-9301-0618b47d28f6-kube-api-access-g26nr\") pod \"03ddcf88-0be9-47cb-9301-0618b47d28f6\" (UID: \"03ddcf88-0be9-47cb-9301-0618b47d28f6\") " Jan 28 15:31:11 crc kubenswrapper[4893]: I0128 15:31:11.313623 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mm6vb\" (UniqueName: \"kubernetes.io/projected/e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6-kube-api-access-mm6vb\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:11 crc kubenswrapper[4893]: I0128 15:31:11.313643 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:11 crc kubenswrapper[4893]: I0128 15:31:11.313654 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03ddcf88-0be9-47cb-9301-0618b47d28f6-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:11 crc kubenswrapper[4893]: I0128 15:31:11.316151 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03ddcf88-0be9-47cb-9301-0618b47d28f6-kube-api-access-g26nr" (OuterVolumeSpecName: "kube-api-access-g26nr") pod "03ddcf88-0be9-47cb-9301-0618b47d28f6" (UID: "03ddcf88-0be9-47cb-9301-0618b47d28f6"). InnerVolumeSpecName "kube-api-access-g26nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:11 crc kubenswrapper[4893]: I0128 15:31:11.414824 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g26nr\" (UniqueName: \"kubernetes.io/projected/03ddcf88-0be9-47cb-9301-0618b47d28f6-kube-api-access-g26nr\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:11 crc kubenswrapper[4893]: I0128 15:31:11.931097 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" event={"ID":"e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6","Type":"ContainerDied","Data":"96621e7c15f530958ab0791578db6f1608d39221bacff989d097dc4a09352bbb"} Jan 28 15:31:11 crc kubenswrapper[4893]: I0128 15:31:11.931823 4893 scope.go:117] "RemoveContainer" containerID="5b63da1cde1ef3fdafcac07f15ba29288fe9f299f96d3a3211aaf65902f07096" Jan 28 15:31:11 crc kubenswrapper[4893]: I0128 15:31:11.932038 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 28 15:31:11 crc kubenswrapper[4893]: I0128 15:31:11.935221 4893 generic.go:334] "Generic (PLEG): container finished" podID="03ddcf88-0be9-47cb-9301-0618b47d28f6" containerID="3ee81afc9404f5f04746ad73ee4dfadd3720e2ef36b461b9bb10e7922bc358ed" exitCode=0 Jan 28 15:31:11 crc kubenswrapper[4893]: I0128 15:31:11.935287 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" event={"ID":"03ddcf88-0be9-47cb-9301-0618b47d28f6","Type":"ContainerDied","Data":"3ee81afc9404f5f04746ad73ee4dfadd3720e2ef36b461b9bb10e7922bc358ed"} Jan 28 15:31:11 crc kubenswrapper[4893]: I0128 15:31:11.935317 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 28 15:31:11 crc kubenswrapper[4893]: I0128 15:31:11.935327 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" event={"ID":"03ddcf88-0be9-47cb-9301-0618b47d28f6","Type":"ContainerDied","Data":"e0718ac535b33f273fc0174379609b39342330c56baaf09544ae02b0baebbc8d"} Jan 28 15:31:11 crc kubenswrapper[4893]: I0128 15:31:11.958787 4893 scope.go:117] "RemoveContainer" containerID="3ee81afc9404f5f04746ad73ee4dfadd3720e2ef36b461b9bb10e7922bc358ed" Jan 28 15:31:11 crc kubenswrapper[4893]: I0128 15:31:11.974620 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"] Jan 28 15:31:11 crc kubenswrapper[4893]: I0128 15:31:11.986187 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"] Jan 28 15:31:11 crc kubenswrapper[4893]: I0128 15:31:11.992174 4893 scope.go:117] "RemoveContainer" containerID="3ee81afc9404f5f04746ad73ee4dfadd3720e2ef36b461b9bb10e7922bc358ed" Jan 28 15:31:11 crc kubenswrapper[4893]: E0128 15:31:11.992945 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ee81afc9404f5f04746ad73ee4dfadd3720e2ef36b461b9bb10e7922bc358ed\": container with ID starting with 3ee81afc9404f5f04746ad73ee4dfadd3720e2ef36b461b9bb10e7922bc358ed not found: ID does not exist" containerID="3ee81afc9404f5f04746ad73ee4dfadd3720e2ef36b461b9bb10e7922bc358ed" Jan 28 15:31:11 crc kubenswrapper[4893]: I0128 15:31:11.993005 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ee81afc9404f5f04746ad73ee4dfadd3720e2ef36b461b9bb10e7922bc358ed"} err="failed to get container status \"3ee81afc9404f5f04746ad73ee4dfadd3720e2ef36b461b9bb10e7922bc358ed\": rpc error: code = NotFound desc = could not find container \"3ee81afc9404f5f04746ad73ee4dfadd3720e2ef36b461b9bb10e7922bc358ed\": container with ID starting with 3ee81afc9404f5f04746ad73ee4dfadd3720e2ef36b461b9bb10e7922bc358ed not found: ID does not exist" Jan 28 15:31:11 crc kubenswrapper[4893]: I0128 15:31:11.995653 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"] Jan 28 15:31:12 crc kubenswrapper[4893]: I0128 15:31:12.002730 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"] Jan 28 15:31:12 crc kubenswrapper[4893]: I0128 15:31:12.901622 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03ddcf88-0be9-47cb-9301-0618b47d28f6" path="/var/lib/kubelet/pods/03ddcf88-0be9-47cb-9301-0618b47d28f6/volumes" Jan 28 15:31:12 crc kubenswrapper[4893]: I0128 15:31:12.902166 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6" path="/var/lib/kubelet/pods/e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6/volumes" Jan 28 15:31:16 crc kubenswrapper[4893]: I0128 15:31:16.058332 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-bf54-account-create-update-zxqbm"] Jan 28 15:31:16 crc kubenswrapper[4893]: I0128 15:31:16.067609 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-bf54-account-create-update-zxqbm"] Jan 28 15:31:16 crc kubenswrapper[4893]: I0128 15:31:16.902308 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4724e828-4305-4fdc-9bec-0af263e7eed9" path="/var/lib/kubelet/pods/4724e828-4305-4fdc-9bec-0af263e7eed9/volumes" Jan 28 15:31:17 crc kubenswrapper[4893]: I0128 15:31:17.031290 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/placement-fbc8-account-create-update-5vsn8"] Jan 28 15:31:17 crc kubenswrapper[4893]: I0128 15:31:17.039770 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-db-create-fj4kz"] Jan 28 15:31:17 crc kubenswrapper[4893]: I0128 15:31:17.048366 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/placement-db-create-kjd29"] Jan 28 15:31:17 crc kubenswrapper[4893]: I0128 15:31:17.059714 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/placement-fbc8-account-create-update-5vsn8"] Jan 28 15:31:17 crc kubenswrapper[4893]: I0128 15:31:17.067314 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-db-create-fj4kz"] Jan 28 15:31:17 crc kubenswrapper[4893]: I0128 15:31:17.074911 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/placement-db-create-kjd29"] Jan 28 15:31:18 crc kubenswrapper[4893]: I0128 15:31:18.900300 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="491fbaf4-ae4b-42f4-a505-70d34407e7ef" path="/var/lib/kubelet/pods/491fbaf4-ae4b-42f4-a505-70d34407e7ef/volumes" Jan 28 15:31:18 crc kubenswrapper[4893]: I0128 15:31:18.901017 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="663785fa-d819-4227-a09f-0a7d2b72e7fe" path="/var/lib/kubelet/pods/663785fa-d819-4227-a09f-0a7d2b72e7fe/volumes" Jan 28 15:31:18 crc kubenswrapper[4893]: I0128 15:31:18.901606 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca2bcba5-5853-4f12-8bde-522e186d1839" path="/var/lib/kubelet/pods/ca2bcba5-5853-4f12-8bde-522e186d1839/volumes" Jan 28 15:31:21 crc kubenswrapper[4893]: I0128 15:31:21.892906 4893 scope.go:117] "RemoveContainer" containerID="79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51" Jan 28 15:31:21 crc kubenswrapper[4893]: E0128 15:31:21.893795 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:31:26 crc kubenswrapper[4893]: I0128 15:31:26.041307 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/root-account-create-update-qxs4k"] Jan 28 15:31:26 crc kubenswrapper[4893]: I0128 15:31:26.048002 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/root-account-create-update-qxs4k"] Jan 28 15:31:26 crc kubenswrapper[4893]: I0128 15:31:26.152980 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:31:26 crc kubenswrapper[4893]: I0128 15:31:26.153268 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="0987bc27-2528-4462-bcd5-0941ec12bef4" containerName="nova-kuttl-api-log" containerID="cri-o://d5c3de7719f40d45a4d342b029b78df9d105f54ea909241cfe57f636e35ecf42" gracePeriod=30 Jan 28 15:31:26 crc kubenswrapper[4893]: I0128 15:31:26.153372 4893 
kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="0987bc27-2528-4462-bcd5-0941ec12bef4" containerName="nova-kuttl-api-api" containerID="cri-o://a389eef2c3e0a96e4d12701462d49828608a25645a60d4ee7ee518881b050c1e" gracePeriod=30 Jan 28 15:31:26 crc kubenswrapper[4893]: I0128 15:31:26.731983 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 15:31:26 crc kubenswrapper[4893]: I0128 15:31:26.732580 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="87e0172a-6fe8-43d0-97ba-3ea57089d58d" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://ba76a74c5e15d214d9ff5666bdead74b3e974634b822003b81259ff501b75995" gracePeriod=30 Jan 28 15:31:26 crc kubenswrapper[4893]: I0128 15:31:26.900975 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="423c812c-5dbb-4719-9b76-c782c05ef6f2" path="/var/lib/kubelet/pods/423c812c-5dbb-4719-9b76-c782c05ef6f2/volumes" Jan 28 15:31:27 crc kubenswrapper[4893]: I0128 15:31:27.077149 4893 generic.go:334] "Generic (PLEG): container finished" podID="0987bc27-2528-4462-bcd5-0941ec12bef4" containerID="d5c3de7719f40d45a4d342b029b78df9d105f54ea909241cfe57f636e35ecf42" exitCode=143 Jan 28 15:31:27 crc kubenswrapper[4893]: I0128 15:31:27.077190 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"0987bc27-2528-4462-bcd5-0941ec12bef4","Type":"ContainerDied","Data":"d5c3de7719f40d45a4d342b029b78df9d105f54ea909241cfe57f636e35ecf42"} Jan 28 15:31:29 crc kubenswrapper[4893]: I0128 15:31:29.832022 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:31:29 crc kubenswrapper[4893]: I0128 15:31:29.869677 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0987bc27-2528-4462-bcd5-0941ec12bef4-logs\") pod \"0987bc27-2528-4462-bcd5-0941ec12bef4\" (UID: \"0987bc27-2528-4462-bcd5-0941ec12bef4\") " Jan 28 15:31:29 crc kubenswrapper[4893]: I0128 15:31:29.869777 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqvcd\" (UniqueName: \"kubernetes.io/projected/0987bc27-2528-4462-bcd5-0941ec12bef4-kube-api-access-kqvcd\") pod \"0987bc27-2528-4462-bcd5-0941ec12bef4\" (UID: \"0987bc27-2528-4462-bcd5-0941ec12bef4\") " Jan 28 15:31:29 crc kubenswrapper[4893]: I0128 15:31:29.869976 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0987bc27-2528-4462-bcd5-0941ec12bef4-config-data\") pod \"0987bc27-2528-4462-bcd5-0941ec12bef4\" (UID: \"0987bc27-2528-4462-bcd5-0941ec12bef4\") " Jan 28 15:31:29 crc kubenswrapper[4893]: I0128 15:31:29.873824 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0987bc27-2528-4462-bcd5-0941ec12bef4-logs" (OuterVolumeSpecName: "logs") pod "0987bc27-2528-4462-bcd5-0941ec12bef4" (UID: "0987bc27-2528-4462-bcd5-0941ec12bef4"). InnerVolumeSpecName "logs". 
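[Annotation] The exitCode=143 above is 128 + 15: the nova-kuttl-api-log container exited on SIGTERM after the runtime began the 30-second grace-period kill. A minimal sketch of the graceful shutdown a container process would implement to drain and exit 0 instead; the cleanup work is a hypothetical placeholder.

package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	// The runtime delivers SIGTERM first ("Killing container with a
	// grace period ... gracePeriod=30"), then SIGKILL once the grace
	// period expires. Catching SIGTERM lets the process exit cleanly
	// rather than dying with status 128+15 = 143.
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM)

	select {
	case <-sigs:
		fmt.Println("SIGTERM received, draining...")
		time.Sleep(time.Second) // stand-in for real cleanup work
		os.Exit(0)
	case <-time.After(time.Hour):
		// normal workload would run here
	}
}
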
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:31:29 crc kubenswrapper[4893]: I0128 15:31:29.913990 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0987bc27-2528-4462-bcd5-0941ec12bef4-kube-api-access-kqvcd" (OuterVolumeSpecName: "kube-api-access-kqvcd") pod "0987bc27-2528-4462-bcd5-0941ec12bef4" (UID: "0987bc27-2528-4462-bcd5-0941ec12bef4"). InnerVolumeSpecName "kube-api-access-kqvcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:29 crc kubenswrapper[4893]: I0128 15:31:29.918944 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0987bc27-2528-4462-bcd5-0941ec12bef4-config-data" (OuterVolumeSpecName: "config-data") pod "0987bc27-2528-4462-bcd5-0941ec12bef4" (UID: "0987bc27-2528-4462-bcd5-0941ec12bef4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:31:29 crc kubenswrapper[4893]: I0128 15:31:29.972230 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0987bc27-2528-4462-bcd5-0941ec12bef4-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:29 crc kubenswrapper[4893]: I0128 15:31:29.972281 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0987bc27-2528-4462-bcd5-0941ec12bef4-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:29 crc kubenswrapper[4893]: I0128 15:31:29.972294 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqvcd\" (UniqueName: \"kubernetes.io/projected/0987bc27-2528-4462-bcd5-0941ec12bef4-kube-api-access-kqvcd\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.038595 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.074045 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87e0172a-6fe8-43d0-97ba-3ea57089d58d-config-data\") pod \"87e0172a-6fe8-43d0-97ba-3ea57089d58d\" (UID: \"87e0172a-6fe8-43d0-97ba-3ea57089d58d\") " Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.074175 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrgx5\" (UniqueName: \"kubernetes.io/projected/87e0172a-6fe8-43d0-97ba-3ea57089d58d-kube-api-access-jrgx5\") pod \"87e0172a-6fe8-43d0-97ba-3ea57089d58d\" (UID: \"87e0172a-6fe8-43d0-97ba-3ea57089d58d\") " Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.082840 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87e0172a-6fe8-43d0-97ba-3ea57089d58d-kube-api-access-jrgx5" (OuterVolumeSpecName: "kube-api-access-jrgx5") pod "87e0172a-6fe8-43d0-97ba-3ea57089d58d" (UID: "87e0172a-6fe8-43d0-97ba-3ea57089d58d"). InnerVolumeSpecName "kube-api-access-jrgx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.095047 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87e0172a-6fe8-43d0-97ba-3ea57089d58d-config-data" (OuterVolumeSpecName: "config-data") pod "87e0172a-6fe8-43d0-97ba-3ea57089d58d" (UID: "87e0172a-6fe8-43d0-97ba-3ea57089d58d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.120881 4893 generic.go:334] "Generic (PLEG): container finished" podID="0987bc27-2528-4462-bcd5-0941ec12bef4" containerID="a389eef2c3e0a96e4d12701462d49828608a25645a60d4ee7ee518881b050c1e" exitCode=0 Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.120988 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"0987bc27-2528-4462-bcd5-0941ec12bef4","Type":"ContainerDied","Data":"a389eef2c3e0a96e4d12701462d49828608a25645a60d4ee7ee518881b050c1e"} Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.121087 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"0987bc27-2528-4462-bcd5-0941ec12bef4","Type":"ContainerDied","Data":"a0f608c11b206a1a00f46a813a267d4b9a92cdd93593f804b5bd85ed20aaf8fc"} Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.121024 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.121149 4893 scope.go:117] "RemoveContainer" containerID="a389eef2c3e0a96e4d12701462d49828608a25645a60d4ee7ee518881b050c1e" Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.126189 4893 generic.go:334] "Generic (PLEG): container finished" podID="87e0172a-6fe8-43d0-97ba-3ea57089d58d" containerID="ba76a74c5e15d214d9ff5666bdead74b3e974634b822003b81259ff501b75995" exitCode=0 Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.126271 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"87e0172a-6fe8-43d0-97ba-3ea57089d58d","Type":"ContainerDied","Data":"ba76a74c5e15d214d9ff5666bdead74b3e974634b822003b81259ff501b75995"} Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.126327 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.126352 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"87e0172a-6fe8-43d0-97ba-3ea57089d58d","Type":"ContainerDied","Data":"679fc33aafa7bdebce9f72bff8d69d22b9916ff9d9122d78952232d7ca374755"} Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.169609 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.174022 4893 scope.go:117] "RemoveContainer" containerID="d5c3de7719f40d45a4d342b029b78df9d105f54ea909241cfe57f636e35ecf42" Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.180568 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87e0172a-6fe8-43d0-97ba-3ea57089d58d-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.180616 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrgx5\" (UniqueName: \"kubernetes.io/projected/87e0172a-6fe8-43d0-97ba-3ea57089d58d-kube-api-access-jrgx5\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.193776 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.204655 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.207210 4893 scope.go:117] "RemoveContainer" containerID="a389eef2c3e0a96e4d12701462d49828608a25645a60d4ee7ee518881b050c1e" Jan 28 15:31:30 crc kubenswrapper[4893]: E0128 15:31:30.207829 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a389eef2c3e0a96e4d12701462d49828608a25645a60d4ee7ee518881b050c1e\": container with ID starting with a389eef2c3e0a96e4d12701462d49828608a25645a60d4ee7ee518881b050c1e not found: ID does not exist" containerID="a389eef2c3e0a96e4d12701462d49828608a25645a60d4ee7ee518881b050c1e" Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.207879 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a389eef2c3e0a96e4d12701462d49828608a25645a60d4ee7ee518881b050c1e"} err="failed to get container status \"a389eef2c3e0a96e4d12701462d49828608a25645a60d4ee7ee518881b050c1e\": rpc error: code = NotFound desc = could not find container \"a389eef2c3e0a96e4d12701462d49828608a25645a60d4ee7ee518881b050c1e\": container with ID starting with a389eef2c3e0a96e4d12701462d49828608a25645a60d4ee7ee518881b050c1e not found: ID does not exist" Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.207912 4893 scope.go:117] "RemoveContainer" containerID="d5c3de7719f40d45a4d342b029b78df9d105f54ea909241cfe57f636e35ecf42" Jan 28 15:31:30 crc kubenswrapper[4893]: E0128 15:31:30.208250 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5c3de7719f40d45a4d342b029b78df9d105f54ea909241cfe57f636e35ecf42\": container with ID starting with d5c3de7719f40d45a4d342b029b78df9d105f54ea909241cfe57f636e35ecf42 not found: ID does not exist" containerID="d5c3de7719f40d45a4d342b029b78df9d105f54ea909241cfe57f636e35ecf42" Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 
15:31:30.208287 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5c3de7719f40d45a4d342b029b78df9d105f54ea909241cfe57f636e35ecf42"} err="failed to get container status \"d5c3de7719f40d45a4d342b029b78df9d105f54ea909241cfe57f636e35ecf42\": rpc error: code = NotFound desc = could not find container \"d5c3de7719f40d45a4d342b029b78df9d105f54ea909241cfe57f636e35ecf42\": container with ID starting with d5c3de7719f40d45a4d342b029b78df9d105f54ea909241cfe57f636e35ecf42 not found: ID does not exist" Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.208305 4893 scope.go:117] "RemoveContainer" containerID="ba76a74c5e15d214d9ff5666bdead74b3e974634b822003b81259ff501b75995" Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.214180 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.230453 4893 scope.go:117] "RemoveContainer" containerID="ba76a74c5e15d214d9ff5666bdead74b3e974634b822003b81259ff501b75995" Jan 28 15:31:30 crc kubenswrapper[4893]: E0128 15:31:30.232845 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba76a74c5e15d214d9ff5666bdead74b3e974634b822003b81259ff501b75995\": container with ID starting with ba76a74c5e15d214d9ff5666bdead74b3e974634b822003b81259ff501b75995 not found: ID does not exist" containerID="ba76a74c5e15d214d9ff5666bdead74b3e974634b822003b81259ff501b75995" Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.232891 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba76a74c5e15d214d9ff5666bdead74b3e974634b822003b81259ff501b75995"} err="failed to get container status \"ba76a74c5e15d214d9ff5666bdead74b3e974634b822003b81259ff501b75995\": rpc error: code = NotFound desc = could not find container \"ba76a74c5e15d214d9ff5666bdead74b3e974634b822003b81259ff501b75995\": container with ID starting with ba76a74c5e15d214d9ff5666bdead74b3e974634b822003b81259ff501b75995 not found: ID does not exist" Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.468359 4893 scope.go:117] "RemoveContainer" containerID="91ec5c55c303d83089e45524a04ff93cf9a04b0599232146a5e43f8051d87b28" Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.559781 4893 scope.go:117] "RemoveContainer" containerID="92bbf08e3e22b539dba9f2b586ccb1a7c7e9c9a08c3307ac3926134d8c83c4a1" Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.618441 4893 scope.go:117] "RemoveContainer" containerID="102f86e791e595c41921982d56f27af9e40d62121b68af2de2f1b5ced550ef25" Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.646084 4893 scope.go:117] "RemoveContainer" containerID="0a7c41e2443e1f8abad80e7dea8f4b931d01e7d7b695254ae6c1ff943aa2f6a9" Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.707512 4893 scope.go:117] "RemoveContainer" containerID="2e5a86aea1d478fc50596210b63ecfa9b4f3ea61a714dafed280d35645141fc2" Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.730067 4893 scope.go:117] "RemoveContainer" containerID="95417485af8fffaf3fa67426d68174802c0a060eff2a78ee01c981b94c80b2bb" Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.788253 4893 scope.go:117] "RemoveContainer" containerID="825c2f1a27ab9e2362363d6606344f75e84fe2c64f7054a93d740a2bf72c9873" Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.810082 4893 scope.go:117] "RemoveContainer" 
containerID="ca718ddd9e8cb38fb4d1ad7c50570e51c05f3d9f7d4934b889b7ff676520fb34" Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.919899 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0987bc27-2528-4462-bcd5-0941ec12bef4" path="/var/lib/kubelet/pods/0987bc27-2528-4462-bcd5-0941ec12bef4/volumes" Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.920795 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87e0172a-6fe8-43d0-97ba-3ea57089d58d" path="/var/lib/kubelet/pods/87e0172a-6fe8-43d0-97ba-3ea57089d58d/volumes" Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.925254 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.925516 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="32c33a83-8802-40c5-94ac-8943e8e5df5f" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://70a9d9ffeb569b81c3bb7111c5b86e19a566f35ca088a26fb604dc1e5e3fdb6b" gracePeriod=30 Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.947569 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.947902 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="9acab649-6a00-44d1-ab58-501d4059248c" containerName="nova-kuttl-metadata-log" containerID="cri-o://5dd39be6f8ea08cb5c422922a7c1bc60d9e5d7ffbb6e3136a1e0bc27e6c2bf90" gracePeriod=30 Jan 28 15:31:30 crc kubenswrapper[4893]: I0128 15:31:30.947911 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="9acab649-6a00-44d1-ab58-501d4059248c" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://06ab4639500e3f42c2b232dea862b31585b98071c77b693f7169db9521aeb57b" gracePeriod=30 Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.124336 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.125263 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://df29e68cc60a3c12a95d57e15180d096837b10f3e05df9dbc7b84c86eb96d3fe" gracePeriod=30 Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.157054 4893 generic.go:334] "Generic (PLEG): container finished" podID="9acab649-6a00-44d1-ab58-501d4059248c" containerID="5dd39be6f8ea08cb5c422922a7c1bc60d9e5d7ffbb6e3136a1e0bc27e6c2bf90" exitCode=143 Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.157107 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"9acab649-6a00-44d1-ab58-501d4059248c","Type":"ContainerDied","Data":"5dd39be6f8ea08cb5c422922a7c1bc60d9e5d7ffbb6e3136a1e0bc27e6c2bf90"} Jan 28 15:31:31 crc kubenswrapper[4893]: E0128 15:31:31.192781 4893 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9acab649_6a00_44d1_ab58_501d4059248c.slice/crio-conmon-5dd39be6f8ea08cb5c422922a7c1bc60d9e5d7ffbb6e3136a1e0bc27e6c2bf90.scope\": 
RecentStats: unable to find data in memory cache]" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.490532 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-wnfq9"] Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.503539 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-9m22m"] Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.512088 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-wnfq9"] Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.521121 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-9m22m"] Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.670418 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell0d7da-account-delete-8qlsp"] Jan 28 15:31:31 crc kubenswrapper[4893]: E0128 15:31:31.673776 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d90cbda0-f31c-4d68-8a5d-288d1e651c62" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.673839 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="d90cbda0-f31c-4d68-8a5d-288d1e651c62" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:31:31 crc kubenswrapper[4893]: E0128 15:31:31.673852 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2a91c2c-ade3-4dd7-983a-49eda7bf545b" containerName="nova-kuttl-metadata-log" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.673860 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2a91c2c-ade3-4dd7-983a-49eda7bf545b" containerName="nova-kuttl-metadata-log" Jan 28 15:31:31 crc kubenswrapper[4893]: E0128 15:31:31.673879 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dab9ea9-5d9d-436d-bbfe-4dae18a6e224" containerName="nova-kuttl-api-api" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.673889 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dab9ea9-5d9d-436d-bbfe-4dae18a6e224" containerName="nova-kuttl-api-api" Jan 28 15:31:31 crc kubenswrapper[4893]: E0128 15:31:31.673913 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0987bc27-2528-4462-bcd5-0941ec12bef4" containerName="nova-kuttl-api-log" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.673925 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="0987bc27-2528-4462-bcd5-0941ec12bef4" containerName="nova-kuttl-api-log" Jan 28 15:31:31 crc kubenswrapper[4893]: E0128 15:31:31.673939 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0987bc27-2528-4462-bcd5-0941ec12bef4" containerName="nova-kuttl-api-api" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.673948 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="0987bc27-2528-4462-bcd5-0941ec12bef4" containerName="nova-kuttl-api-api" Jan 28 15:31:31 crc kubenswrapper[4893]: E0128 15:31:31.673965 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dab9ea9-5d9d-436d-bbfe-4dae18a6e224" containerName="nova-kuttl-api-log" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.673974 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dab9ea9-5d9d-436d-bbfe-4dae18a6e224" containerName="nova-kuttl-api-log" Jan 28 15:31:31 crc kubenswrapper[4893]: E0128 15:31:31.673992 4893 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c2a91c2c-ade3-4dd7-983a-49eda7bf545b" containerName="nova-kuttl-metadata-metadata" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.674001 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2a91c2c-ade3-4dd7-983a-49eda7bf545b" containerName="nova-kuttl-metadata-metadata" Jan 28 15:31:31 crc kubenswrapper[4893]: E0128 15:31:31.674015 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b747a8ff-f3ba-438c-84a5-2175ace99287" containerName="nova-kuttl-metadata-log" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.674025 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="b747a8ff-f3ba-438c-84a5-2175ace99287" containerName="nova-kuttl-metadata-log" Jan 28 15:31:31 crc kubenswrapper[4893]: E0128 15:31:31.674035 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ddcf88-0be9-47cb-9301-0618b47d28f6" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.674044 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ddcf88-0be9-47cb-9301-0618b47d28f6" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 15:31:31 crc kubenswrapper[4893]: E0128 15:31:31.674060 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b747a8ff-f3ba-438c-84a5-2175ace99287" containerName="nova-kuttl-metadata-metadata" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.674067 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="b747a8ff-f3ba-438c-84a5-2175ace99287" containerName="nova-kuttl-metadata-metadata" Jan 28 15:31:31 crc kubenswrapper[4893]: E0128 15:31:31.674081 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bd2f429-3780-4f91-8eeb-fab736e0ff82" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.674089 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bd2f429-3780-4f91-8eeb-fab736e0ff82" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:31:31 crc kubenswrapper[4893]: E0128 15:31:31.674105 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.674116 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 15:31:31 crc kubenswrapper[4893]: E0128 15:31:31.674128 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87e0172a-6fe8-43d0-97ba-3ea57089d58d" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.674139 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="87e0172a-6fe8-43d0-97ba-3ea57089d58d" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 15:31:31 crc kubenswrapper[4893]: E0128 15:31:31.674155 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="747647df-2218-4f3a-a1f4-132b662282ac" containerName="nova-kuttl-api-api" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.674162 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="747647df-2218-4f3a-a1f4-132b662282ac" containerName="nova-kuttl-api-api" Jan 28 15:31:31 crc kubenswrapper[4893]: E0128 15:31:31.674177 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4802233c-9a5c-4074-b1ae-df434de29109" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 
15:31:31.674188 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="4802233c-9a5c-4074-b1ae-df434de29109" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 15:31:31 crc kubenswrapper[4893]: E0128 15:31:31.674201 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0b48ebb-b212-4370-aee1-db0d64b7a446" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.674210 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0b48ebb-b212-4370-aee1-db0d64b7a446" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 15:31:31 crc kubenswrapper[4893]: E0128 15:31:31.674228 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="747647df-2218-4f3a-a1f4-132b662282ac" containerName="nova-kuttl-api-log" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.674236 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="747647df-2218-4f3a-a1f4-132b662282ac" containerName="nova-kuttl-api-log" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.674489 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="b747a8ff-f3ba-438c-84a5-2175ace99287" containerName="nova-kuttl-metadata-metadata" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.674513 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="4802233c-9a5c-4074-b1ae-df434de29109" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.674523 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2a91c2c-ade3-4dd7-983a-49eda7bf545b" containerName="nova-kuttl-metadata-log" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.674533 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dab9ea9-5d9d-436d-bbfe-4dae18a6e224" containerName="nova-kuttl-api-api" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.674550 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="0987bc27-2528-4462-bcd5-0941ec12bef4" containerName="nova-kuttl-api-api" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.674559 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dab9ea9-5d9d-436d-bbfe-4dae18a6e224" containerName="nova-kuttl-api-log" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.674570 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0b48ebb-b212-4370-aee1-db0d64b7a446" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.674592 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="e92cc5e6-dd20-4dde-9a09-7c8fbc646ca6" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.674603 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="0987bc27-2528-4462-bcd5-0941ec12bef4" containerName="nova-kuttl-api-log" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.674619 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="03ddcf88-0be9-47cb-9301-0618b47d28f6" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.674639 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="87e0172a-6fe8-43d0-97ba-3ea57089d58d" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.674651 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="747647df-2218-4f3a-a1f4-132b662282ac" 
containerName="nova-kuttl-api-log" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.674663 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="d90cbda0-f31c-4d68-8a5d-288d1e651c62" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.674676 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="747647df-2218-4f3a-a1f4-132b662282ac" containerName="nova-kuttl-api-api" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.674687 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bd2f429-3780-4f91-8eeb-fab736e0ff82" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.674699 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="b747a8ff-f3ba-438c-84a5-2175ace99287" containerName="nova-kuttl-metadata-log" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.674709 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2a91c2c-ade3-4dd7-983a-49eda7bf545b" containerName="nova-kuttl-metadata-metadata" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.675906 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell0d7da-account-delete-8qlsp" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.703908 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell0d7da-account-delete-8qlsp"] Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.740325 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd88a0d-84a3-462a-b7fc-0c4dea75ed65-operator-scripts\") pod \"novacell0d7da-account-delete-8qlsp\" (UID: \"3bd88a0d-84a3-462a-b7fc-0c4dea75ed65\") " pod="nova-kuttl-default/novacell0d7da-account-delete-8qlsp" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.740451 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h59fg\" (UniqueName: \"kubernetes.io/projected/3bd88a0d-84a3-462a-b7fc-0c4dea75ed65-kube-api-access-h59fg\") pod \"novacell0d7da-account-delete-8qlsp\" (UID: \"3bd88a0d-84a3-462a-b7fc-0c4dea75ed65\") " pod="nova-kuttl-default/novacell0d7da-account-delete-8qlsp" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.760883 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novaapi9f40-account-delete-48tdh"] Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.762249 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novaapi9f40-account-delete-48tdh" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.776939 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapi9f40-account-delete-48tdh"] Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.844921 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba55229e-1b8b-4f31-8b36-cf087710bd12-operator-scripts\") pod \"novaapi9f40-account-delete-48tdh\" (UID: \"ba55229e-1b8b-4f31-8b36-cf087710bd12\") " pod="nova-kuttl-default/novaapi9f40-account-delete-48tdh" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.845002 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd88a0d-84a3-462a-b7fc-0c4dea75ed65-operator-scripts\") pod \"novacell0d7da-account-delete-8qlsp\" (UID: \"3bd88a0d-84a3-462a-b7fc-0c4dea75ed65\") " pod="nova-kuttl-default/novacell0d7da-account-delete-8qlsp" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.845077 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h59fg\" (UniqueName: \"kubernetes.io/projected/3bd88a0d-84a3-462a-b7fc-0c4dea75ed65-kube-api-access-h59fg\") pod \"novacell0d7da-account-delete-8qlsp\" (UID: \"3bd88a0d-84a3-462a-b7fc-0c4dea75ed65\") " pod="nova-kuttl-default/novacell0d7da-account-delete-8qlsp" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.845169 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc8vn\" (UniqueName: \"kubernetes.io/projected/ba55229e-1b8b-4f31-8b36-cf087710bd12-kube-api-access-fc8vn\") pod \"novaapi9f40-account-delete-48tdh\" (UID: \"ba55229e-1b8b-4f31-8b36-cf087710bd12\") " pod="nova-kuttl-default/novaapi9f40-account-delete-48tdh" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.846266 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd88a0d-84a3-462a-b7fc-0c4dea75ed65-operator-scripts\") pod \"novacell0d7da-account-delete-8qlsp\" (UID: \"3bd88a0d-84a3-462a-b7fc-0c4dea75ed65\") " pod="nova-kuttl-default/novacell0d7da-account-delete-8qlsp" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.860615 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell1b9a3-account-delete-lhppw"] Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.862064 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell1b9a3-account-delete-lhppw" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.868572 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell1b9a3-account-delete-lhppw"] Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.894333 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.895193 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podUID="3a4152d8-cd1c-478b-977c-3542b4ccf601" containerName="nova-kuttl-cell1-novncproxy-novncproxy" containerID="cri-o://db0c2cca79c16e5b10bee79aa8e77ced55b5a7abcd299e5cab7c07470cdd2f4d" gracePeriod=30 Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.912991 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h59fg\" (UniqueName: \"kubernetes.io/projected/3bd88a0d-84a3-462a-b7fc-0c4dea75ed65-kube-api-access-h59fg\") pod \"novacell0d7da-account-delete-8qlsp\" (UID: \"3bd88a0d-84a3-462a-b7fc-0c4dea75ed65\") " pod="nova-kuttl-default/novacell0d7da-account-delete-8qlsp" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.946166 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc8vn\" (UniqueName: \"kubernetes.io/projected/ba55229e-1b8b-4f31-8b36-cf087710bd12-kube-api-access-fc8vn\") pod \"novaapi9f40-account-delete-48tdh\" (UID: \"ba55229e-1b8b-4f31-8b36-cf087710bd12\") " pod="nova-kuttl-default/novaapi9f40-account-delete-48tdh" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.946235 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba55229e-1b8b-4f31-8b36-cf087710bd12-operator-scripts\") pod \"novaapi9f40-account-delete-48tdh\" (UID: \"ba55229e-1b8b-4f31-8b36-cf087710bd12\") " pod="nova-kuttl-default/novaapi9f40-account-delete-48tdh" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.946326 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krznl\" (UniqueName: \"kubernetes.io/projected/5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b-kube-api-access-krznl\") pod \"novacell1b9a3-account-delete-lhppw\" (UID: \"5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b\") " pod="nova-kuttl-default/novacell1b9a3-account-delete-lhppw" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.946354 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b-operator-scripts\") pod \"novacell1b9a3-account-delete-lhppw\" (UID: \"5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b\") " pod="nova-kuttl-default/novacell1b9a3-account-delete-lhppw" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.946992 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba55229e-1b8b-4f31-8b36-cf087710bd12-operator-scripts\") pod \"novaapi9f40-account-delete-48tdh\" (UID: \"ba55229e-1b8b-4f31-8b36-cf087710bd12\") " pod="nova-kuttl-default/novaapi9f40-account-delete-48tdh" Jan 28 15:31:31 crc kubenswrapper[4893]: I0128 15:31:31.979463 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc8vn\" (UniqueName: 
\"kubernetes.io/projected/ba55229e-1b8b-4f31-8b36-cf087710bd12-kube-api-access-fc8vn\") pod \"novaapi9f40-account-delete-48tdh\" (UID: \"ba55229e-1b8b-4f31-8b36-cf087710bd12\") " pod="nova-kuttl-default/novaapi9f40-account-delete-48tdh" Jan 28 15:31:32 crc kubenswrapper[4893]: I0128 15:31:32.038284 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell0d7da-account-delete-8qlsp" Jan 28 15:31:32 crc kubenswrapper[4893]: I0128 15:31:32.042545 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podUID="3a4152d8-cd1c-478b-977c-3542b4ccf601" containerName="nova-kuttl-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"http://10.217.0.154:6080/vnc_lite.html\": dial tcp 10.217.0.154:6080: connect: connection refused" Jan 28 15:31:32 crc kubenswrapper[4893]: I0128 15:31:32.047857 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krznl\" (UniqueName: \"kubernetes.io/projected/5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b-kube-api-access-krznl\") pod \"novacell1b9a3-account-delete-lhppw\" (UID: \"5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b\") " pod="nova-kuttl-default/novacell1b9a3-account-delete-lhppw" Jan 28 15:31:32 crc kubenswrapper[4893]: I0128 15:31:32.048094 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b-operator-scripts\") pod \"novacell1b9a3-account-delete-lhppw\" (UID: \"5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b\") " pod="nova-kuttl-default/novacell1b9a3-account-delete-lhppw" Jan 28 15:31:32 crc kubenswrapper[4893]: I0128 15:31:32.049005 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b-operator-scripts\") pod \"novacell1b9a3-account-delete-lhppw\" (UID: \"5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b\") " pod="nova-kuttl-default/novacell1b9a3-account-delete-lhppw" Jan 28 15:31:32 crc kubenswrapper[4893]: I0128 15:31:32.070997 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krznl\" (UniqueName: \"kubernetes.io/projected/5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b-kube-api-access-krznl\") pod \"novacell1b9a3-account-delete-lhppw\" (UID: \"5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b\") " pod="nova-kuttl-default/novacell1b9a3-account-delete-lhppw" Jan 28 15:31:32 crc kubenswrapper[4893]: I0128 15:31:32.110149 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapi9f40-account-delete-48tdh" Jan 28 15:31:32 crc kubenswrapper[4893]: I0128 15:31:32.256321 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell1b9a3-account-delete-lhppw" Jan 28 15:31:32 crc kubenswrapper[4893]: I0128 15:31:32.599518 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell0d7da-account-delete-8qlsp"] Jan 28 15:31:32 crc kubenswrapper[4893]: I0128 15:31:32.683973 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapi9f40-account-delete-48tdh"] Jan 28 15:31:32 crc kubenswrapper[4893]: I0128 15:31:32.781272 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell1b9a3-account-delete-lhppw"] Jan 28 15:31:32 crc kubenswrapper[4893]: I0128 15:31:32.907331 4893 scope.go:117] "RemoveContainer" containerID="79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51" Jan 28 15:31:32 crc kubenswrapper[4893]: E0128 15:31:32.908515 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:31:32 crc kubenswrapper[4893]: I0128 15:31:32.939369 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c7bcd70-52c5-4df8-8c09-28881a2fa384" path="/var/lib/kubelet/pods/5c7bcd70-52c5-4df8-8c09-28881a2fa384/volumes" Jan 28 15:31:32 crc kubenswrapper[4893]: I0128 15:31:32.939958 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7c67810-713f-40ef-a19c-f7a726b17271" path="/var/lib/kubelet/pods/b7c67810-713f-40ef-a19c-f7a726b17271/volumes" Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.067333 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.172728 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a4152d8-cd1c-478b-977c-3542b4ccf601-config-data\") pod \"3a4152d8-cd1c-478b-977c-3542b4ccf601\" (UID: \"3a4152d8-cd1c-478b-977c-3542b4ccf601\") " Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.172957 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drksj\" (UniqueName: \"kubernetes.io/projected/3a4152d8-cd1c-478b-977c-3542b4ccf601-kube-api-access-drksj\") pod \"3a4152d8-cd1c-478b-977c-3542b4ccf601\" (UID: \"3a4152d8-cd1c-478b-977c-3542b4ccf601\") " Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.185463 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a4152d8-cd1c-478b-977c-3542b4ccf601-kube-api-access-drksj" (OuterVolumeSpecName: "kube-api-access-drksj") pod "3a4152d8-cd1c-478b-977c-3542b4ccf601" (UID: "3a4152d8-cd1c-478b-977c-3542b4ccf601"). InnerVolumeSpecName "kube-api-access-drksj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.190046 4893 generic.go:334] "Generic (PLEG): container finished" podID="3a4152d8-cd1c-478b-977c-3542b4ccf601" containerID="db0c2cca79c16e5b10bee79aa8e77ced55b5a7abcd299e5cab7c07470cdd2f4d" exitCode=0 Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.190166 4893 util.go:48] "No ready sandbox for pod can be found. 
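[Annotation] The recurring machine-config-daemon error above is the kubelet's crash-loop backoff: each restart of a crashing container is delayed exponentially up to the cap visible in the message ("back-off 5m0s"). A sketch of that policy assuming the commonly documented defaults (10s initial delay, doubling per crash, 5-minute cap); the exact constants live in the kubelet.

package main

import (
	"fmt"
	"time"
)

// crashLoopDelay returns the restart delay after n consecutive
// crashes, assuming a 10s initial delay that doubles each time and is
// capped at 5 minutes, matching the "back-off 5m0s restarting failed
// container" message once the cap is reached.
func crashLoopDelay(n int) time.Duration {
	d := 10 * time.Second
	for i := 1; i < n; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for n := 1; n <= 7; n++ {
		fmt.Printf("crash %d -> back-off %s\n", n, crashLoopDelay(n))
	}
}
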
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.190181 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"3a4152d8-cd1c-478b-977c-3542b4ccf601","Type":"ContainerDied","Data":"db0c2cca79c16e5b10bee79aa8e77ced55b5a7abcd299e5cab7c07470cdd2f4d"} Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.191582 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"3a4152d8-cd1c-478b-977c-3542b4ccf601","Type":"ContainerDied","Data":"d1c961d990fc579efe3d4f50e2027896cf4ba1246d3ad30e73f0054ed48f0eeb"} Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.191636 4893 scope.go:117] "RemoveContainer" containerID="db0c2cca79c16e5b10bee79aa8e77ced55b5a7abcd299e5cab7c07470cdd2f4d" Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.192927 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell0d7da-account-delete-8qlsp" event={"ID":"3bd88a0d-84a3-462a-b7fc-0c4dea75ed65","Type":"ContainerStarted","Data":"c349ed8d33f428bfc9fe593c73b18a2fd3b6b0e70a38ea342dda8fd66a8f99c9"} Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.192997 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell0d7da-account-delete-8qlsp" event={"ID":"3bd88a0d-84a3-462a-b7fc-0c4dea75ed65","Type":"ContainerStarted","Data":"89a1348853585da8c91bd4daa52a6fc43d723aeeb0ff4e878f6eef68545d7cfa"} Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.208711 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a4152d8-cd1c-478b-977c-3542b4ccf601-config-data" (OuterVolumeSpecName: "config-data") pod "3a4152d8-cd1c-478b-977c-3542b4ccf601" (UID: "3a4152d8-cd1c-478b-977c-3542b4ccf601"). InnerVolumeSpecName "config-data". 
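[Annotation] The "ContainerStatus from runtime service failed ... NotFound" errors here and earlier are benign: the kubelet re-queries container IDs CRI-O has already deleted, logs "DeleteContainer returned error" at info level, and moves on. A sketch of that idempotent-delete pattern using the google.golang.org/grpc status packages; the remove function is a hypothetical stand-in for the CRI RemoveContainer RPC.

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer demonstrates idempotent cleanup: NotFound from the
// runtime means the container is already gone, so it is logged and
// swallowed rather than treated as a real failure, mirroring the
// "DeleteContainer returned error ... not found" lines above.
func removeContainer(remove func(id string) error, id string) error {
	if err := remove(id); err != nil {
		if status.Code(err) == codes.NotFound {
			fmt.Printf("container %q already removed: %v\n", id, err)
			return nil
		}
		return err
	}
	return nil
}

func main() {
	// Stand-in for a CRI RemoveContainer call that has already run once.
	fakeRemove := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	if err := removeContainer(fakeRemove, "3ee81afc9404"); err != nil {
		fmt.Println("unexpected error:", err)
	}
}
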
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.208939 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapi9f40-account-delete-48tdh" event={"ID":"ba55229e-1b8b-4f31-8b36-cf087710bd12","Type":"ContainerStarted","Data":"4bbcac9de4ae0c2c1c1bc6069ca1d9a27ee98144bf47baea9bee7efa48f036e0"} Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.209006 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapi9f40-account-delete-48tdh" event={"ID":"ba55229e-1b8b-4f31-8b36-cf087710bd12","Type":"ContainerStarted","Data":"0830c3458a6372e752a2a138d0597bf59ecb972bb72c6e74f65549e2779a06b9"} Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.216010 4893 scope.go:117] "RemoveContainer" containerID="db0c2cca79c16e5b10bee79aa8e77ced55b5a7abcd299e5cab7c07470cdd2f4d" Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.216686 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1b9a3-account-delete-lhppw" event={"ID":"5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b","Type":"ContainerStarted","Data":"93504058c39d3c4aa71bddcce276392c7a9bfdbbda084d7f5fc64a4e16f0eb5a"} Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.216805 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1b9a3-account-delete-lhppw" event={"ID":"5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b","Type":"ContainerStarted","Data":"f05fdd696f6f516fae22806afb9a23697104b4eb30c4aad3e5e5fe61357aa51f"} Jan 28 15:31:33 crc kubenswrapper[4893]: E0128 15:31:33.217030 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db0c2cca79c16e5b10bee79aa8e77ced55b5a7abcd299e5cab7c07470cdd2f4d\": container with ID starting with db0c2cca79c16e5b10bee79aa8e77ced55b5a7abcd299e5cab7c07470cdd2f4d not found: ID does not exist" containerID="db0c2cca79c16e5b10bee79aa8e77ced55b5a7abcd299e5cab7c07470cdd2f4d" Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.217133 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db0c2cca79c16e5b10bee79aa8e77ced55b5a7abcd299e5cab7c07470cdd2f4d"} err="failed to get container status \"db0c2cca79c16e5b10bee79aa8e77ced55b5a7abcd299e5cab7c07470cdd2f4d\": rpc error: code = NotFound desc = could not find container \"db0c2cca79c16e5b10bee79aa8e77ced55b5a7abcd299e5cab7c07470cdd2f4d\": container with ID starting with db0c2cca79c16e5b10bee79aa8e77ced55b5a7abcd299e5cab7c07470cdd2f4d not found: ID does not exist" Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.244926 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/novacell0d7da-account-delete-8qlsp" podStartSLOduration=2.244907881 podStartE2EDuration="2.244907881s" podCreationTimestamp="2026-01-28 15:31:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:31:33.221464686 +0000 UTC m=+1810.995079714" watchObservedRunningTime="2026-01-28 15:31:33.244907881 +0000 UTC m=+1811.018522919" Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.246536 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/novaapi9f40-account-delete-48tdh" podStartSLOduration=2.2465233749999998 podStartE2EDuration="2.246523375s" podCreationTimestamp="2026-01-28 15:31:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:31:33.244904651 +0000 UTC m=+1811.018519689" watchObservedRunningTime="2026-01-28 15:31:33.246523375 +0000 UTC m=+1811.020138403" Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.285314 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a4152d8-cd1c-478b-977c-3542b4ccf601-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.285360 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drksj\" (UniqueName: \"kubernetes.io/projected/3a4152d8-cd1c-478b-977c-3542b4ccf601-kube-api-access-drksj\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.292612 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/novacell1b9a3-account-delete-lhppw" podStartSLOduration=2.292588554 podStartE2EDuration="2.292588554s" podCreationTimestamp="2026-01-28 15:31:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:31:33.260995728 +0000 UTC m=+1811.034610756" watchObservedRunningTime="2026-01-28 15:31:33.292588554 +0000 UTC m=+1811.066203582" Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.563069 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.572577 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.763575 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.895824 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnpwt\" (UniqueName: \"kubernetes.io/projected/6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f-kube-api-access-bnpwt\") pod \"6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f\" (UID: \"6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f\") " Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.896150 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f-config-data\") pod \"6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f\" (UID: \"6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f\") " Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.900555 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f-kube-api-access-bnpwt" (OuterVolumeSpecName: "kube-api-access-bnpwt") pod "6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f" (UID: "6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f"). InnerVolumeSpecName "kube-api-access-bnpwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.930523 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f-config-data" (OuterVolumeSpecName: "config-data") pod "6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f" (UID: "6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.998059 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnpwt\" (UniqueName: \"kubernetes.io/projected/6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f-kube-api-access-bnpwt\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:33 crc kubenswrapper[4893]: I0128 15:31:33.998138 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:34 crc kubenswrapper[4893]: I0128 15:31:34.226391 4893 generic.go:334] "Generic (PLEG): container finished" podID="9acab649-6a00-44d1-ab58-501d4059248c" containerID="06ab4639500e3f42c2b232dea862b31585b98071c77b693f7169db9521aeb57b" exitCode=0 Jan 28 15:31:34 crc kubenswrapper[4893]: I0128 15:31:34.226446 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"9acab649-6a00-44d1-ab58-501d4059248c","Type":"ContainerDied","Data":"06ab4639500e3f42c2b232dea862b31585b98071c77b693f7169db9521aeb57b"} Jan 28 15:31:34 crc kubenswrapper[4893]: I0128 15:31:34.227651 4893 generic.go:334] "Generic (PLEG): container finished" podID="ba55229e-1b8b-4f31-8b36-cf087710bd12" containerID="4bbcac9de4ae0c2c1c1bc6069ca1d9a27ee98144bf47baea9bee7efa48f036e0" exitCode=0 Jan 28 15:31:34 crc kubenswrapper[4893]: I0128 15:31:34.227692 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapi9f40-account-delete-48tdh" event={"ID":"ba55229e-1b8b-4f31-8b36-cf087710bd12","Type":"ContainerDied","Data":"4bbcac9de4ae0c2c1c1bc6069ca1d9a27ee98144bf47baea9bee7efa48f036e0"} Jan 28 15:31:34 crc kubenswrapper[4893]: I0128 15:31:34.234530 4893 generic.go:334] "Generic (PLEG): container finished" podID="5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b" containerID="93504058c39d3c4aa71bddcce276392c7a9bfdbbda084d7f5fc64a4e16f0eb5a" exitCode=0 Jan 28 15:31:34 crc kubenswrapper[4893]: I0128 15:31:34.234618 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1b9a3-account-delete-lhppw" event={"ID":"5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b","Type":"ContainerDied","Data":"93504058c39d3c4aa71bddcce276392c7a9bfdbbda084d7f5fc64a4e16f0eb5a"} Jan 28 15:31:34 crc kubenswrapper[4893]: I0128 15:31:34.238737 4893 generic.go:334] "Generic (PLEG): container finished" podID="3bd88a0d-84a3-462a-b7fc-0c4dea75ed65" containerID="c349ed8d33f428bfc9fe593c73b18a2fd3b6b0e70a38ea342dda8fd66a8f99c9" exitCode=0 Jan 28 15:31:34 crc kubenswrapper[4893]: I0128 15:31:34.238807 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell0d7da-account-delete-8qlsp" event={"ID":"3bd88a0d-84a3-462a-b7fc-0c4dea75ed65","Type":"ContainerDied","Data":"c349ed8d33f428bfc9fe593c73b18a2fd3b6b0e70a38ea342dda8fd66a8f99c9"} Jan 28 15:31:34 crc kubenswrapper[4893]: I0128 15:31:34.240781 4893 generic.go:334] "Generic (PLEG): container finished" podID="6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f" containerID="df29e68cc60a3c12a95d57e15180d096837b10f3e05df9dbc7b84c86eb96d3fe" exitCode=0 Jan 28 15:31:34 crc kubenswrapper[4893]: I0128 15:31:34.240816 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f","Type":"ContainerDied","Data":"df29e68cc60a3c12a95d57e15180d096837b10f3e05df9dbc7b84c86eb96d3fe"} Jan 28 15:31:34 crc kubenswrapper[4893]: I0128 15:31:34.240835 
4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f","Type":"ContainerDied","Data":"c2dd1200bd7422eda215b044e929d3b92a746a7ebc095161e06ee52c31f10390"} Jan 28 15:31:34 crc kubenswrapper[4893]: I0128 15:31:34.240854 4893 scope.go:117] "RemoveContainer" containerID="df29e68cc60a3c12a95d57e15180d096837b10f3e05df9dbc7b84c86eb96d3fe" Jan 28 15:31:34 crc kubenswrapper[4893]: I0128 15:31:34.240973 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:31:34 crc kubenswrapper[4893]: I0128 15:31:34.284326 4893 scope.go:117] "RemoveContainer" containerID="df29e68cc60a3c12a95d57e15180d096837b10f3e05df9dbc7b84c86eb96d3fe" Jan 28 15:31:34 crc kubenswrapper[4893]: E0128 15:31:34.285961 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df29e68cc60a3c12a95d57e15180d096837b10f3e05df9dbc7b84c86eb96d3fe\": container with ID starting with df29e68cc60a3c12a95d57e15180d096837b10f3e05df9dbc7b84c86eb96d3fe not found: ID does not exist" containerID="df29e68cc60a3c12a95d57e15180d096837b10f3e05df9dbc7b84c86eb96d3fe" Jan 28 15:31:34 crc kubenswrapper[4893]: I0128 15:31:34.286273 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df29e68cc60a3c12a95d57e15180d096837b10f3e05df9dbc7b84c86eb96d3fe"} err="failed to get container status \"df29e68cc60a3c12a95d57e15180d096837b10f3e05df9dbc7b84c86eb96d3fe\": rpc error: code = NotFound desc = could not find container \"df29e68cc60a3c12a95d57e15180d096837b10f3e05df9dbc7b84c86eb96d3fe\": container with ID starting with df29e68cc60a3c12a95d57e15180d096837b10f3e05df9dbc7b84c86eb96d3fe not found: ID does not exist" Jan 28 15:31:34 crc kubenswrapper[4893]: I0128 15:31:34.296999 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 15:31:34 crc kubenswrapper[4893]: E0128 15:31:34.298551 4893 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.9:60782->38.102.83.9:46815: read tcp 38.102.83.9:60782->38.102.83.9:46815: read: connection reset by peer Jan 28 15:31:34 crc kubenswrapper[4893]: I0128 15:31:34.305331 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 15:31:34 crc kubenswrapper[4893]: I0128 15:31:34.479634 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:31:34 crc kubenswrapper[4893]: I0128 15:31:34.608621 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9acab649-6a00-44d1-ab58-501d4059248c-logs\") pod \"9acab649-6a00-44d1-ab58-501d4059248c\" (UID: \"9acab649-6a00-44d1-ab58-501d4059248c\") " Jan 28 15:31:34 crc kubenswrapper[4893]: I0128 15:31:34.609035 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6chsf\" (UniqueName: \"kubernetes.io/projected/9acab649-6a00-44d1-ab58-501d4059248c-kube-api-access-6chsf\") pod \"9acab649-6a00-44d1-ab58-501d4059248c\" (UID: \"9acab649-6a00-44d1-ab58-501d4059248c\") " Jan 28 15:31:34 crc kubenswrapper[4893]: I0128 15:31:34.609208 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9acab649-6a00-44d1-ab58-501d4059248c-config-data\") pod \"9acab649-6a00-44d1-ab58-501d4059248c\" (UID: \"9acab649-6a00-44d1-ab58-501d4059248c\") " Jan 28 15:31:34 crc kubenswrapper[4893]: I0128 15:31:34.609699 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9acab649-6a00-44d1-ab58-501d4059248c-logs" (OuterVolumeSpecName: "logs") pod "9acab649-6a00-44d1-ab58-501d4059248c" (UID: "9acab649-6a00-44d1-ab58-501d4059248c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:31:34 crc kubenswrapper[4893]: I0128 15:31:34.616318 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9acab649-6a00-44d1-ab58-501d4059248c-kube-api-access-6chsf" (OuterVolumeSpecName: "kube-api-access-6chsf") pod "9acab649-6a00-44d1-ab58-501d4059248c" (UID: "9acab649-6a00-44d1-ab58-501d4059248c"). InnerVolumeSpecName "kube-api-access-6chsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:34 crc kubenswrapper[4893]: I0128 15:31:34.641361 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9acab649-6a00-44d1-ab58-501d4059248c-config-data" (OuterVolumeSpecName: "config-data") pod "9acab649-6a00-44d1-ab58-501d4059248c" (UID: "9acab649-6a00-44d1-ab58-501d4059248c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:31:34 crc kubenswrapper[4893]: I0128 15:31:34.711531 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9acab649-6a00-44d1-ab58-501d4059248c-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:34 crc kubenswrapper[4893]: I0128 15:31:34.711631 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6chsf\" (UniqueName: \"kubernetes.io/projected/9acab649-6a00-44d1-ab58-501d4059248c-kube-api-access-6chsf\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:34 crc kubenswrapper[4893]: I0128 15:31:34.711649 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9acab649-6a00-44d1-ab58-501d4059248c-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:34 crc kubenswrapper[4893]: I0128 15:31:34.902641 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a4152d8-cd1c-478b-977c-3542b4ccf601" path="/var/lib/kubelet/pods/3a4152d8-cd1c-478b-977c-3542b4ccf601/volumes" Jan 28 15:31:34 crc kubenswrapper[4893]: I0128 15:31:34.903723 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f" path="/var/lib/kubelet/pods/6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f/volumes" Jan 28 15:31:35 crc kubenswrapper[4893]: I0128 15:31:35.256189 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"9acab649-6a00-44d1-ab58-501d4059248c","Type":"ContainerDied","Data":"62bdc803b29f847c1296542fa6e1117b35fea9f2a46aca3cb97b1fc5954a56c5"} Jan 28 15:31:35 crc kubenswrapper[4893]: I0128 15:31:35.256272 4893 scope.go:117] "RemoveContainer" containerID="06ab4639500e3f42c2b232dea862b31585b98071c77b693f7169db9521aeb57b" Jan 28 15:31:35 crc kubenswrapper[4893]: I0128 15:31:35.256403 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:31:35 crc kubenswrapper[4893]: I0128 15:31:35.288788 4893 scope.go:117] "RemoveContainer" containerID="5dd39be6f8ea08cb5c422922a7c1bc60d9e5d7ffbb6e3136a1e0bc27e6c2bf90" Jan 28 15:31:35 crc kubenswrapper[4893]: I0128 15:31:35.298273 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:31:35 crc kubenswrapper[4893]: I0128 15:31:35.308672 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:31:35 crc kubenswrapper[4893]: E0128 15:31:35.457935 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="70a9d9ffeb569b81c3bb7111c5b86e19a566f35ca088a26fb604dc1e5e3fdb6b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 15:31:35 crc kubenswrapper[4893]: E0128 15:31:35.517164 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="70a9d9ffeb569b81c3bb7111c5b86e19a566f35ca088a26fb604dc1e5e3fdb6b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 15:31:35 crc kubenswrapper[4893]: E0128 15:31:35.542632 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="70a9d9ffeb569b81c3bb7111c5b86e19a566f35ca088a26fb604dc1e5e3fdb6b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 15:31:35 crc kubenswrapper[4893]: E0128 15:31:35.542697 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="32c33a83-8802-40c5-94ac-8943e8e5df5f" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:31:35 crc kubenswrapper[4893]: I0128 15:31:35.940978 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell0d7da-account-delete-8qlsp" Jan 28 15:31:35 crc kubenswrapper[4893]: I0128 15:31:35.948955 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell1b9a3-account-delete-lhppw" Jan 28 15:31:35 crc kubenswrapper[4893]: I0128 15:31:35.957342 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novaapi9f40-account-delete-48tdh" Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.044541 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krznl\" (UniqueName: \"kubernetes.io/projected/5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b-kube-api-access-krznl\") pod \"5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b\" (UID: \"5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b\") " Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.044663 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b-operator-scripts\") pod \"5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b\" (UID: \"5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b\") " Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.044706 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba55229e-1b8b-4f31-8b36-cf087710bd12-operator-scripts\") pod \"ba55229e-1b8b-4f31-8b36-cf087710bd12\" (UID: \"ba55229e-1b8b-4f31-8b36-cf087710bd12\") " Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.044750 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h59fg\" (UniqueName: \"kubernetes.io/projected/3bd88a0d-84a3-462a-b7fc-0c4dea75ed65-kube-api-access-h59fg\") pod \"3bd88a0d-84a3-462a-b7fc-0c4dea75ed65\" (UID: \"3bd88a0d-84a3-462a-b7fc-0c4dea75ed65\") " Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.044850 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd88a0d-84a3-462a-b7fc-0c4dea75ed65-operator-scripts\") pod \"3bd88a0d-84a3-462a-b7fc-0c4dea75ed65\" (UID: \"3bd88a0d-84a3-462a-b7fc-0c4dea75ed65\") " Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.044915 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fc8vn\" (UniqueName: \"kubernetes.io/projected/ba55229e-1b8b-4f31-8b36-cf087710bd12-kube-api-access-fc8vn\") pod \"ba55229e-1b8b-4f31-8b36-cf087710bd12\" (UID: \"ba55229e-1b8b-4f31-8b36-cf087710bd12\") " Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.045880 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bd88a0d-84a3-462a-b7fc-0c4dea75ed65-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3bd88a0d-84a3-462a-b7fc-0c4dea75ed65" (UID: "3bd88a0d-84a3-462a-b7fc-0c4dea75ed65"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.045901 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b" (UID: "5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.046037 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba55229e-1b8b-4f31-8b36-cf087710bd12-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ba55229e-1b8b-4f31-8b36-cf087710bd12" (UID: "ba55229e-1b8b-4f31-8b36-cf087710bd12"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.048592 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd88a0d-84a3-462a-b7fc-0c4dea75ed65-kube-api-access-h59fg" (OuterVolumeSpecName: "kube-api-access-h59fg") pod "3bd88a0d-84a3-462a-b7fc-0c4dea75ed65" (UID: "3bd88a0d-84a3-462a-b7fc-0c4dea75ed65"). InnerVolumeSpecName "kube-api-access-h59fg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.048651 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba55229e-1b8b-4f31-8b36-cf087710bd12-kube-api-access-fc8vn" (OuterVolumeSpecName: "kube-api-access-fc8vn") pod "ba55229e-1b8b-4f31-8b36-cf087710bd12" (UID: "ba55229e-1b8b-4f31-8b36-cf087710bd12"). InnerVolumeSpecName "kube-api-access-fc8vn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.048752 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b-kube-api-access-krznl" (OuterVolumeSpecName: "kube-api-access-krznl") pod "5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b" (UID: "5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b"). InnerVolumeSpecName "kube-api-access-krznl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.146881 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krznl\" (UniqueName: \"kubernetes.io/projected/5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b-kube-api-access-krznl\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.146927 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.146938 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba55229e-1b8b-4f31-8b36-cf087710bd12-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.146949 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h59fg\" (UniqueName: \"kubernetes.io/projected/3bd88a0d-84a3-462a-b7fc-0c4dea75ed65-kube-api-access-h59fg\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.146958 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd88a0d-84a3-462a-b7fc-0c4dea75ed65-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.146992 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fc8vn\" (UniqueName: \"kubernetes.io/projected/ba55229e-1b8b-4f31-8b36-cf087710bd12-kube-api-access-fc8vn\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.277208 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell0d7da-account-delete-8qlsp" Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.277216 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell0d7da-account-delete-8qlsp" event={"ID":"3bd88a0d-84a3-462a-b7fc-0c4dea75ed65","Type":"ContainerDied","Data":"89a1348853585da8c91bd4daa52a6fc43d723aeeb0ff4e878f6eef68545d7cfa"} Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.277345 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89a1348853585da8c91bd4daa52a6fc43d723aeeb0ff4e878f6eef68545d7cfa" Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.280090 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapi9f40-account-delete-48tdh" Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.280341 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapi9f40-account-delete-48tdh" event={"ID":"ba55229e-1b8b-4f31-8b36-cf087710bd12","Type":"ContainerDied","Data":"0830c3458a6372e752a2a138d0597bf59ecb972bb72c6e74f65549e2779a06b9"} Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.280375 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0830c3458a6372e752a2a138d0597bf59ecb972bb72c6e74f65549e2779a06b9" Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.281848 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1b9a3-account-delete-lhppw" event={"ID":"5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b","Type":"ContainerDied","Data":"f05fdd696f6f516fae22806afb9a23697104b4eb30c4aad3e5e5fe61357aa51f"} Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.281871 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f05fdd696f6f516fae22806afb9a23697104b4eb30c4aad3e5e5fe61357aa51f" Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.281910 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell1b9a3-account-delete-lhppw" Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.701295 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-lcqjt"] Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.709088 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-lcqjt"] Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.715073 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell0d7da-account-delete-8qlsp"] Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.721610 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-d7da-account-create-update-lt48b"] Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.727606 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell0d7da-account-delete-8qlsp"] Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.735890 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-d7da-account-create-update-lt48b"] Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.911845 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0671435f-14c4-40d2-8af9-173b53e986e6" path="/var/lib/kubelet/pods/0671435f-14c4-40d2-8af9-173b53e986e6/volumes" Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.914845 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bd88a0d-84a3-462a-b7fc-0c4dea75ed65" path="/var/lib/kubelet/pods/3bd88a0d-84a3-462a-b7fc-0c4dea75ed65/volumes" Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.915879 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a" path="/var/lib/kubelet/pods/4b9449df-d6ab-4ed3-a68c-bfe73a3ba35a/volumes" Jan 28 15:31:36 crc kubenswrapper[4893]: I0128 15:31:36.917018 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9acab649-6a00-44d1-ab58-501d4059248c" path="/var/lib/kubelet/pods/9acab649-6a00-44d1-ab58-501d4059248c/volumes" Jan 28 15:31:40 crc kubenswrapper[4893]: I0128 15:31:40.160272 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:31:40 crc kubenswrapper[4893]: I0128 15:31:40.222137 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvnpq\" (UniqueName: \"kubernetes.io/projected/32c33a83-8802-40c5-94ac-8943e8e5df5f-kube-api-access-vvnpq\") pod \"32c33a83-8802-40c5-94ac-8943e8e5df5f\" (UID: \"32c33a83-8802-40c5-94ac-8943e8e5df5f\") " Jan 28 15:31:40 crc kubenswrapper[4893]: I0128 15:31:40.222172 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c33a83-8802-40c5-94ac-8943e8e5df5f-config-data\") pod \"32c33a83-8802-40c5-94ac-8943e8e5df5f\" (UID: \"32c33a83-8802-40c5-94ac-8943e8e5df5f\") " Jan 28 15:31:40 crc kubenswrapper[4893]: I0128 15:31:40.227683 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32c33a83-8802-40c5-94ac-8943e8e5df5f-kube-api-access-vvnpq" (OuterVolumeSpecName: "kube-api-access-vvnpq") pod "32c33a83-8802-40c5-94ac-8943e8e5df5f" (UID: "32c33a83-8802-40c5-94ac-8943e8e5df5f"). InnerVolumeSpecName "kube-api-access-vvnpq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:40 crc kubenswrapper[4893]: I0128 15:31:40.248519 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32c33a83-8802-40c5-94ac-8943e8e5df5f-config-data" (OuterVolumeSpecName: "config-data") pod "32c33a83-8802-40c5-94ac-8943e8e5df5f" (UID: "32c33a83-8802-40c5-94ac-8943e8e5df5f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:31:40 crc kubenswrapper[4893]: I0128 15:31:40.316850 4893 generic.go:334] "Generic (PLEG): container finished" podID="32c33a83-8802-40c5-94ac-8943e8e5df5f" containerID="70a9d9ffeb569b81c3bb7111c5b86e19a566f35ca088a26fb604dc1e5e3fdb6b" exitCode=0 Jan 28 15:31:40 crc kubenswrapper[4893]: I0128 15:31:40.316944 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:31:40 crc kubenswrapper[4893]: I0128 15:31:40.316961 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"32c33a83-8802-40c5-94ac-8943e8e5df5f","Type":"ContainerDied","Data":"70a9d9ffeb569b81c3bb7111c5b86e19a566f35ca088a26fb604dc1e5e3fdb6b"} Jan 28 15:31:40 crc kubenswrapper[4893]: I0128 15:31:40.317583 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"32c33a83-8802-40c5-94ac-8943e8e5df5f","Type":"ContainerDied","Data":"d3e473e3c650f08b27eec5006cb6962d57b6557e62c893abeb35e8f7cf14a45c"} Jan 28 15:31:40 crc kubenswrapper[4893]: I0128 15:31:40.317618 4893 scope.go:117] "RemoveContainer" containerID="70a9d9ffeb569b81c3bb7111c5b86e19a566f35ca088a26fb604dc1e5e3fdb6b" Jan 28 15:31:40 crc kubenswrapper[4893]: I0128 15:31:40.323529 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvnpq\" (UniqueName: \"kubernetes.io/projected/32c33a83-8802-40c5-94ac-8943e8e5df5f-kube-api-access-vvnpq\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:40 crc kubenswrapper[4893]: I0128 15:31:40.323566 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c33a83-8802-40c5-94ac-8943e8e5df5f-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:40 crc kubenswrapper[4893]: I0128 15:31:40.338532 4893 scope.go:117] "RemoveContainer" containerID="70a9d9ffeb569b81c3bb7111c5b86e19a566f35ca088a26fb604dc1e5e3fdb6b" Jan 28 15:31:40 crc kubenswrapper[4893]: E0128 15:31:40.340323 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70a9d9ffeb569b81c3bb7111c5b86e19a566f35ca088a26fb604dc1e5e3fdb6b\": container with ID starting with 70a9d9ffeb569b81c3bb7111c5b86e19a566f35ca088a26fb604dc1e5e3fdb6b not found: ID does not exist" containerID="70a9d9ffeb569b81c3bb7111c5b86e19a566f35ca088a26fb604dc1e5e3fdb6b" Jan 28 15:31:40 crc kubenswrapper[4893]: I0128 15:31:40.340366 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70a9d9ffeb569b81c3bb7111c5b86e19a566f35ca088a26fb604dc1e5e3fdb6b"} err="failed to get container status \"70a9d9ffeb569b81c3bb7111c5b86e19a566f35ca088a26fb604dc1e5e3fdb6b\": rpc error: code = NotFound desc = could not find container \"70a9d9ffeb569b81c3bb7111c5b86e19a566f35ca088a26fb604dc1e5e3fdb6b\": container with ID starting with 70a9d9ffeb569b81c3bb7111c5b86e19a566f35ca088a26fb604dc1e5e3fdb6b not found: ID does not exist" Jan 28 15:31:40 crc 
kubenswrapper[4893]: I0128 15:31:40.357187 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:31:40 crc kubenswrapper[4893]: I0128 15:31:40.364731 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:31:40 crc kubenswrapper[4893]: I0128 15:31:40.901345 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32c33a83-8802-40c5-94ac-8943e8e5df5f" path="/var/lib/kubelet/pods/32c33a83-8802-40c5-94ac-8943e8e5df5f/volumes" Jan 28 15:31:41 crc kubenswrapper[4893]: I0128 15:31:41.817028 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-db-create-s7sfs"] Jan 28 15:31:41 crc kubenswrapper[4893]: I0128 15:31:41.826902 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-db-create-s7sfs"] Jan 28 15:31:41 crc kubenswrapper[4893]: I0128 15:31:41.848010 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novaapi9f40-account-delete-48tdh"] Jan 28 15:31:41 crc kubenswrapper[4893]: I0128 15:31:41.874517 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novaapi9f40-account-delete-48tdh"] Jan 28 15:31:41 crc kubenswrapper[4893]: I0128 15:31:41.889921 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-9f40-account-create-update-jlpld"] Jan 28 15:31:41 crc kubenswrapper[4893]: I0128 15:31:41.897540 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-9f40-account-create-update-jlpld"] Jan 28 15:31:41 crc kubenswrapper[4893]: I0128 15:31:41.920266 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-2gr65"] Jan 28 15:31:41 crc kubenswrapper[4893]: I0128 15:31:41.926845 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-2gr65"] Jan 28 15:31:41 crc kubenswrapper[4893]: I0128 15:31:41.944433 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell1b9a3-account-delete-lhppw"] Jan 28 15:31:41 crc kubenswrapper[4893]: I0128 15:31:41.955867 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell1b9a3-account-delete-lhppw"] Jan 28 15:31:41 crc kubenswrapper[4893]: I0128 15:31:41.964540 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-b9a3-account-create-update-8sbdz"] Jan 28 15:31:41 crc kubenswrapper[4893]: I0128 15:31:41.974336 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-b9a3-account-create-update-8sbdz"] Jan 28 15:31:42 crc kubenswrapper[4893]: I0128 15:31:42.089751 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-xclqf"] Jan 28 15:31:42 crc kubenswrapper[4893]: I0128 15:31:42.097079 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-xclqf"] Jan 28 15:31:42 crc kubenswrapper[4893]: I0128 15:31:42.147385 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-scjhk"] Jan 28 15:31:42 crc kubenswrapper[4893]: I0128 15:31:42.155321 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-scjhk"] Jan 28 15:31:42 crc kubenswrapper[4893]: I0128 15:31:42.900770 4893 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="0ef8b37b-ceed-44d3-9d50-f713684f2b04" path="/var/lib/kubelet/pods/0ef8b37b-ceed-44d3-9d50-f713684f2b04/volumes" Jan 28 15:31:42 crc kubenswrapper[4893]: I0128 15:31:42.901256 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c3aa8f2-d928-410e-b3b4-57c85bba4490" path="/var/lib/kubelet/pods/3c3aa8f2-d928-410e-b3b4-57c85bba4490/volumes" Jan 28 15:31:42 crc kubenswrapper[4893]: I0128 15:31:42.902505 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b" path="/var/lib/kubelet/pods/5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b/volumes" Jan 28 15:31:42 crc kubenswrapper[4893]: I0128 15:31:42.903377 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba55229e-1b8b-4f31-8b36-cf087710bd12" path="/var/lib/kubelet/pods/ba55229e-1b8b-4f31-8b36-cf087710bd12/volumes" Jan 28 15:31:42 crc kubenswrapper[4893]: I0128 15:31:42.904037 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5122cff-317d-492a-876b-f13a62d6e1db" path="/var/lib/kubelet/pods/d5122cff-317d-492a-876b-f13a62d6e1db/volumes" Jan 28 15:31:42 crc kubenswrapper[4893]: I0128 15:31:42.905341 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfc40127-55a9-4d65-9271-5b4b5d48473d" path="/var/lib/kubelet/pods/dfc40127-55a9-4d65-9271-5b4b5d48473d/volumes" Jan 28 15:31:42 crc kubenswrapper[4893]: I0128 15:31:42.906871 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e57926c0-c91a-4479-9440-de28827aa98f" path="/var/lib/kubelet/pods/e57926c0-c91a-4479-9440-de28827aa98f/volumes" Jan 28 15:31:42 crc kubenswrapper[4893]: I0128 15:31:42.907726 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed8525f9-d3bf-452b-bd30-a60e65e32d7d" path="/var/lib/kubelet/pods/ed8525f9-d3bf-452b-bd30-a60e65e32d7d/volumes" Jan 28 15:31:43 crc kubenswrapper[4893]: I0128 15:31:43.892445 4893 scope.go:117] "RemoveContainer" containerID="79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51" Jan 28 15:31:43 crc kubenswrapper[4893]: E0128 15:31:43.893001 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:31:44 crc kubenswrapper[4893]: I0128 15:31:44.813963 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-db-create-jk86f"] Jan 28 15:31:44 crc kubenswrapper[4893]: E0128 15:31:44.814349 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9acab649-6a00-44d1-ab58-501d4059248c" containerName="nova-kuttl-metadata-log" Jan 28 15:31:44 crc kubenswrapper[4893]: I0128 15:31:44.814367 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="9acab649-6a00-44d1-ab58-501d4059248c" containerName="nova-kuttl-metadata-log" Jan 28 15:31:44 crc kubenswrapper[4893]: E0128 15:31:44.814379 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd88a0d-84a3-462a-b7fc-0c4dea75ed65" containerName="mariadb-account-delete" Jan 28 15:31:44 crc kubenswrapper[4893]: I0128 15:31:44.814389 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd88a0d-84a3-462a-b7fc-0c4dea75ed65" 
containerName="mariadb-account-delete" Jan 28 15:31:44 crc kubenswrapper[4893]: E0128 15:31:44.814421 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 15:31:44 crc kubenswrapper[4893]: I0128 15:31:44.814432 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 15:31:44 crc kubenswrapper[4893]: E0128 15:31:44.814442 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba55229e-1b8b-4f31-8b36-cf087710bd12" containerName="mariadb-account-delete" Jan 28 15:31:44 crc kubenswrapper[4893]: I0128 15:31:44.814449 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba55229e-1b8b-4f31-8b36-cf087710bd12" containerName="mariadb-account-delete" Jan 28 15:31:44 crc kubenswrapper[4893]: E0128 15:31:44.814460 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b" containerName="mariadb-account-delete" Jan 28 15:31:44 crc kubenswrapper[4893]: I0128 15:31:44.814467 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b" containerName="mariadb-account-delete" Jan 28 15:31:44 crc kubenswrapper[4893]: E0128 15:31:44.814497 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32c33a83-8802-40c5-94ac-8943e8e5df5f" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:31:44 crc kubenswrapper[4893]: I0128 15:31:44.814505 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="32c33a83-8802-40c5-94ac-8943e8e5df5f" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:31:44 crc kubenswrapper[4893]: E0128 15:31:44.814515 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9acab649-6a00-44d1-ab58-501d4059248c" containerName="nova-kuttl-metadata-metadata" Jan 28 15:31:44 crc kubenswrapper[4893]: I0128 15:31:44.814523 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="9acab649-6a00-44d1-ab58-501d4059248c" containerName="nova-kuttl-metadata-metadata" Jan 28 15:31:44 crc kubenswrapper[4893]: E0128 15:31:44.814556 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a4152d8-cd1c-478b-977c-3542b4ccf601" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 28 15:31:44 crc kubenswrapper[4893]: I0128 15:31:44.814564 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a4152d8-cd1c-478b-977c-3542b4ccf601" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 28 15:31:44 crc kubenswrapper[4893]: I0128 15:31:44.814759 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="9acab649-6a00-44d1-ab58-501d4059248c" containerName="nova-kuttl-metadata-metadata" Jan 28 15:31:44 crc kubenswrapper[4893]: I0128 15:31:44.814776 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bd88a0d-84a3-462a-b7fc-0c4dea75ed65" containerName="mariadb-account-delete" Jan 28 15:31:44 crc kubenswrapper[4893]: I0128 15:31:44.814791 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba55229e-1b8b-4f31-8b36-cf087710bd12" containerName="mariadb-account-delete" Jan 28 15:31:44 crc kubenswrapper[4893]: I0128 15:31:44.814801 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a4152d8-cd1c-478b-977c-3542b4ccf601" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 28 15:31:44 crc kubenswrapper[4893]: I0128 15:31:44.814812 4893 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6bc5e9cc-3e5a-40dd-a8da-dfc5100fa12f" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 15:31:44 crc kubenswrapper[4893]: I0128 15:31:44.814827 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="5be5ecfd-fbf3-46b2-9f61-c961f4a26b9b" containerName="mariadb-account-delete" Jan 28 15:31:44 crc kubenswrapper[4893]: I0128 15:31:44.814841 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="32c33a83-8802-40c5-94ac-8943e8e5df5f" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:31:44 crc kubenswrapper[4893]: I0128 15:31:44.814856 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="9acab649-6a00-44d1-ab58-501d4059248c" containerName="nova-kuttl-metadata-log" Jan 28 15:31:44 crc kubenswrapper[4893]: I0128 15:31:44.815545 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-jk86f" Jan 28 15:31:44 crc kubenswrapper[4893]: I0128 15:31:44.834579 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-jk86f"] Jan 28 15:31:44 crc kubenswrapper[4893]: I0128 15:31:44.902392 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99a38c89-ae5a-4e48-8816-423ce2312cc0-operator-scripts\") pod \"nova-api-db-create-jk86f\" (UID: \"99a38c89-ae5a-4e48-8816-423ce2312cc0\") " pod="nova-kuttl-default/nova-api-db-create-jk86f" Jan 28 15:31:44 crc kubenswrapper[4893]: I0128 15:31:44.902486 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8v76\" (UniqueName: \"kubernetes.io/projected/99a38c89-ae5a-4e48-8816-423ce2312cc0-kube-api-access-w8v76\") pod \"nova-api-db-create-jk86f\" (UID: \"99a38c89-ae5a-4e48-8816-423ce2312cc0\") " pod="nova-kuttl-default/nova-api-db-create-jk86f" Jan 28 15:31:44 crc kubenswrapper[4893]: I0128 15:31:44.920749 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-n4ks8"] Jan 28 15:31:44 crc kubenswrapper[4893]: I0128 15:31:44.922324 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-n4ks8" Jan 28 15:31:44 crc kubenswrapper[4893]: I0128 15:31:44.936084 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-n4ks8"] Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.003987 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8v76\" (UniqueName: \"kubernetes.io/projected/99a38c89-ae5a-4e48-8816-423ce2312cc0-kube-api-access-w8v76\") pod \"nova-api-db-create-jk86f\" (UID: \"99a38c89-ae5a-4e48-8816-423ce2312cc0\") " pod="nova-kuttl-default/nova-api-db-create-jk86f" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.004798 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99a38c89-ae5a-4e48-8816-423ce2312cc0-operator-scripts\") pod \"nova-api-db-create-jk86f\" (UID: \"99a38c89-ae5a-4e48-8816-423ce2312cc0\") " pod="nova-kuttl-default/nova-api-db-create-jk86f" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.005417 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99a38c89-ae5a-4e48-8816-423ce2312cc0-operator-scripts\") pod \"nova-api-db-create-jk86f\" (UID: \"99a38c89-ae5a-4e48-8816-423ce2312cc0\") " pod="nova-kuttl-default/nova-api-db-create-jk86f" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.019304 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-3b1a-account-create-update-r22sm"] Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.020414 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-3b1a-account-create-update-r22sm" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.025453 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-api-db-secret" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.029567 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8v76\" (UniqueName: \"kubernetes.io/projected/99a38c89-ae5a-4e48-8816-423ce2312cc0-kube-api-access-w8v76\") pod \"nova-api-db-create-jk86f\" (UID: \"99a38c89-ae5a-4e48-8816-423ce2312cc0\") " pod="nova-kuttl-default/nova-api-db-create-jk86f" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.032766 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-3b1a-account-create-update-r22sm"] Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.107061 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6gp9\" (UniqueName: \"kubernetes.io/projected/45994969-6957-49cd-95cc-3da11b3f8a53-kube-api-access-p6gp9\") pod \"nova-api-3b1a-account-create-update-r22sm\" (UID: \"45994969-6957-49cd-95cc-3da11b3f8a53\") " pod="nova-kuttl-default/nova-api-3b1a-account-create-update-r22sm" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.107411 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x45wg\" (UniqueName: \"kubernetes.io/projected/410a2c70-a715-4d47-a056-ff7d2ca6e79f-kube-api-access-x45wg\") pod \"nova-cell0-db-create-n4ks8\" (UID: \"410a2c70-a715-4d47-a056-ff7d2ca6e79f\") " pod="nova-kuttl-default/nova-cell0-db-create-n4ks8" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.107439 4893 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/410a2c70-a715-4d47-a056-ff7d2ca6e79f-operator-scripts\") pod \"nova-cell0-db-create-n4ks8\" (UID: \"410a2c70-a715-4d47-a056-ff7d2ca6e79f\") " pod="nova-kuttl-default/nova-cell0-db-create-n4ks8" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.107532 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45994969-6957-49cd-95cc-3da11b3f8a53-operator-scripts\") pod \"nova-api-3b1a-account-create-update-r22sm\" (UID: \"45994969-6957-49cd-95cc-3da11b3f8a53\") " pod="nova-kuttl-default/nova-api-3b1a-account-create-update-r22sm" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.121164 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-9sqgr"] Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.122619 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-9sqgr" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.128632 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-9sqgr"] Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.143580 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-jk86f" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.209153 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6gp9\" (UniqueName: \"kubernetes.io/projected/45994969-6957-49cd-95cc-3da11b3f8a53-kube-api-access-p6gp9\") pod \"nova-api-3b1a-account-create-update-r22sm\" (UID: \"45994969-6957-49cd-95cc-3da11b3f8a53\") " pod="nova-kuttl-default/nova-api-3b1a-account-create-update-r22sm" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.209211 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x45wg\" (UniqueName: \"kubernetes.io/projected/410a2c70-a715-4d47-a056-ff7d2ca6e79f-kube-api-access-x45wg\") pod \"nova-cell0-db-create-n4ks8\" (UID: \"410a2c70-a715-4d47-a056-ff7d2ca6e79f\") " pod="nova-kuttl-default/nova-cell0-db-create-n4ks8" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.209245 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/410a2c70-a715-4d47-a056-ff7d2ca6e79f-operator-scripts\") pod \"nova-cell0-db-create-n4ks8\" (UID: \"410a2c70-a715-4d47-a056-ff7d2ca6e79f\") " pod="nova-kuttl-default/nova-cell0-db-create-n4ks8" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.209276 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-649rf\" (UniqueName: \"kubernetes.io/projected/3a62af92-8f89-4800-9724-c651058a0cf2-kube-api-access-649rf\") pod \"nova-cell1-db-create-9sqgr\" (UID: \"3a62af92-8f89-4800-9724-c651058a0cf2\") " pod="nova-kuttl-default/nova-cell1-db-create-9sqgr" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.209371 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45994969-6957-49cd-95cc-3da11b3f8a53-operator-scripts\") pod \"nova-api-3b1a-account-create-update-r22sm\" (UID: \"45994969-6957-49cd-95cc-3da11b3f8a53\") " 
pod="nova-kuttl-default/nova-api-3b1a-account-create-update-r22sm" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.209424 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a62af92-8f89-4800-9724-c651058a0cf2-operator-scripts\") pod \"nova-cell1-db-create-9sqgr\" (UID: \"3a62af92-8f89-4800-9724-c651058a0cf2\") " pod="nova-kuttl-default/nova-cell1-db-create-9sqgr" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.212785 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/410a2c70-a715-4d47-a056-ff7d2ca6e79f-operator-scripts\") pod \"nova-cell0-db-create-n4ks8\" (UID: \"410a2c70-a715-4d47-a056-ff7d2ca6e79f\") " pod="nova-kuttl-default/nova-cell0-db-create-n4ks8" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.213065 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45994969-6957-49cd-95cc-3da11b3f8a53-operator-scripts\") pod \"nova-api-3b1a-account-create-update-r22sm\" (UID: \"45994969-6957-49cd-95cc-3da11b3f8a53\") " pod="nova-kuttl-default/nova-api-3b1a-account-create-update-r22sm" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.228940 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-558f-account-create-update-hgksj"] Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.230332 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-558f-account-create-update-hgksj" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.239355 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6gp9\" (UniqueName: \"kubernetes.io/projected/45994969-6957-49cd-95cc-3da11b3f8a53-kube-api-access-p6gp9\") pod \"nova-api-3b1a-account-create-update-r22sm\" (UID: \"45994969-6957-49cd-95cc-3da11b3f8a53\") " pod="nova-kuttl-default/nova-api-3b1a-account-create-update-r22sm" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.240584 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell0-db-secret" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.257818 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x45wg\" (UniqueName: \"kubernetes.io/projected/410a2c70-a715-4d47-a056-ff7d2ca6e79f-kube-api-access-x45wg\") pod \"nova-cell0-db-create-n4ks8\" (UID: \"410a2c70-a715-4d47-a056-ff7d2ca6e79f\") " pod="nova-kuttl-default/nova-cell0-db-create-n4ks8" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.256004 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-558f-account-create-update-hgksj"] Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.316634 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a62af92-8f89-4800-9724-c651058a0cf2-operator-scripts\") pod \"nova-cell1-db-create-9sqgr\" (UID: \"3a62af92-8f89-4800-9724-c651058a0cf2\") " pod="nova-kuttl-default/nova-cell1-db-create-9sqgr" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.316842 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-649rf\" (UniqueName: \"kubernetes.io/projected/3a62af92-8f89-4800-9724-c651058a0cf2-kube-api-access-649rf\") pod 
\"nova-cell1-db-create-9sqgr\" (UID: \"3a62af92-8f89-4800-9724-c651058a0cf2\") " pod="nova-kuttl-default/nova-cell1-db-create-9sqgr" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.318111 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a62af92-8f89-4800-9724-c651058a0cf2-operator-scripts\") pod \"nova-cell1-db-create-9sqgr\" (UID: \"3a62af92-8f89-4800-9724-c651058a0cf2\") " pod="nova-kuttl-default/nova-cell1-db-create-9sqgr" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.340995 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-649rf\" (UniqueName: \"kubernetes.io/projected/3a62af92-8f89-4800-9724-c651058a0cf2-kube-api-access-649rf\") pod \"nova-cell1-db-create-9sqgr\" (UID: \"3a62af92-8f89-4800-9724-c651058a0cf2\") " pod="nova-kuttl-default/nova-cell1-db-create-9sqgr" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.408850 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-3b1a-account-create-update-r22sm" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.427052 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skhjg\" (UniqueName: \"kubernetes.io/projected/64828696-910a-4780-90f7-7022cb08c19f-kube-api-access-skhjg\") pod \"nova-cell0-558f-account-create-update-hgksj\" (UID: \"64828696-910a-4780-90f7-7022cb08c19f\") " pod="nova-kuttl-default/nova-cell0-558f-account-create-update-hgksj" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.427224 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64828696-910a-4780-90f7-7022cb08c19f-operator-scripts\") pod \"nova-cell0-558f-account-create-update-hgksj\" (UID: \"64828696-910a-4780-90f7-7022cb08c19f\") " pod="nova-kuttl-default/nova-cell0-558f-account-create-update-hgksj" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.445604 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-9sqgr" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.448067 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-82e7-account-create-update-ls97d"] Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.450630 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-82e7-account-create-update-ls97d" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.453766 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell1-db-secret" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.477675 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-82e7-account-create-update-ls97d"] Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.529322 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skhjg\" (UniqueName: \"kubernetes.io/projected/64828696-910a-4780-90f7-7022cb08c19f-kube-api-access-skhjg\") pod \"nova-cell0-558f-account-create-update-hgksj\" (UID: \"64828696-910a-4780-90f7-7022cb08c19f\") " pod="nova-kuttl-default/nova-cell0-558f-account-create-update-hgksj" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.529698 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64828696-910a-4780-90f7-7022cb08c19f-operator-scripts\") pod \"nova-cell0-558f-account-create-update-hgksj\" (UID: \"64828696-910a-4780-90f7-7022cb08c19f\") " pod="nova-kuttl-default/nova-cell0-558f-account-create-update-hgksj" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.530888 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64828696-910a-4780-90f7-7022cb08c19f-operator-scripts\") pod \"nova-cell0-558f-account-create-update-hgksj\" (UID: \"64828696-910a-4780-90f7-7022cb08c19f\") " pod="nova-kuttl-default/nova-cell0-558f-account-create-update-hgksj" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.539319 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-n4ks8" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.549131 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skhjg\" (UniqueName: \"kubernetes.io/projected/64828696-910a-4780-90f7-7022cb08c19f-kube-api-access-skhjg\") pod \"nova-cell0-558f-account-create-update-hgksj\" (UID: \"64828696-910a-4780-90f7-7022cb08c19f\") " pod="nova-kuttl-default/nova-cell0-558f-account-create-update-hgksj" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.630456 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-558f-account-create-update-hgksj" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.631606 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn7hr\" (UniqueName: \"kubernetes.io/projected/3bf82a69-6a29-4c98-8e72-1d4f4a73edda-kube-api-access-gn7hr\") pod \"nova-cell1-82e7-account-create-update-ls97d\" (UID: \"3bf82a69-6a29-4c98-8e72-1d4f4a73edda\") " pod="nova-kuttl-default/nova-cell1-82e7-account-create-update-ls97d" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.631669 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bf82a69-6a29-4c98-8e72-1d4f4a73edda-operator-scripts\") pod \"nova-cell1-82e7-account-create-update-ls97d\" (UID: \"3bf82a69-6a29-4c98-8e72-1d4f4a73edda\") " pod="nova-kuttl-default/nova-cell1-82e7-account-create-update-ls97d" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.725289 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-jk86f"] Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.735895 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn7hr\" (UniqueName: \"kubernetes.io/projected/3bf82a69-6a29-4c98-8e72-1d4f4a73edda-kube-api-access-gn7hr\") pod \"nova-cell1-82e7-account-create-update-ls97d\" (UID: \"3bf82a69-6a29-4c98-8e72-1d4f4a73edda\") " pod="nova-kuttl-default/nova-cell1-82e7-account-create-update-ls97d" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.735975 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bf82a69-6a29-4c98-8e72-1d4f4a73edda-operator-scripts\") pod \"nova-cell1-82e7-account-create-update-ls97d\" (UID: \"3bf82a69-6a29-4c98-8e72-1d4f4a73edda\") " pod="nova-kuttl-default/nova-cell1-82e7-account-create-update-ls97d" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.737431 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bf82a69-6a29-4c98-8e72-1d4f4a73edda-operator-scripts\") pod \"nova-cell1-82e7-account-create-update-ls97d\" (UID: \"3bf82a69-6a29-4c98-8e72-1d4f4a73edda\") " pod="nova-kuttl-default/nova-cell1-82e7-account-create-update-ls97d" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.765749 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn7hr\" (UniqueName: \"kubernetes.io/projected/3bf82a69-6a29-4c98-8e72-1d4f4a73edda-kube-api-access-gn7hr\") pod \"nova-cell1-82e7-account-create-update-ls97d\" (UID: \"3bf82a69-6a29-4c98-8e72-1d4f4a73edda\") " pod="nova-kuttl-default/nova-cell1-82e7-account-create-update-ls97d" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.780567 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-82e7-account-create-update-ls97d" Jan 28 15:31:45 crc kubenswrapper[4893]: I0128 15:31:45.969953 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-3b1a-account-create-update-r22sm"] Jan 28 15:31:46 crc kubenswrapper[4893]: I0128 15:31:46.049637 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-9sqgr"] Jan 28 15:31:46 crc kubenswrapper[4893]: W0128 15:31:46.074768 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a62af92_8f89_4800_9724_c651058a0cf2.slice/crio-b40d1a41988ec8a2d3476064e506a661bdef681b16cbb4df39ad0b87d7284809 WatchSource:0}: Error finding container b40d1a41988ec8a2d3476064e506a661bdef681b16cbb4df39ad0b87d7284809: Status 404 returned error can't find the container with id b40d1a41988ec8a2d3476064e506a661bdef681b16cbb4df39ad0b87d7284809 Jan 28 15:31:46 crc kubenswrapper[4893]: I0128 15:31:46.153816 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-n4ks8"] Jan 28 15:31:46 crc kubenswrapper[4893]: I0128 15:31:46.248779 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-558f-account-create-update-hgksj"] Jan 28 15:31:46 crc kubenswrapper[4893]: W0128 15:31:46.326711 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64828696_910a_4780_90f7_7022cb08c19f.slice/crio-51ff38a6fd001c730ac63e8dcaa0912a5dc8818ad97a718c9767456f96a3190b WatchSource:0}: Error finding container 51ff38a6fd001c730ac63e8dcaa0912a5dc8818ad97a718c9767456f96a3190b: Status 404 returned error can't find the container with id 51ff38a6fd001c730ac63e8dcaa0912a5dc8818ad97a718c9767456f96a3190b Jan 28 15:31:46 crc kubenswrapper[4893]: I0128 15:31:46.371332 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-82e7-account-create-update-ls97d"] Jan 28 15:31:46 crc kubenswrapper[4893]: I0128 15:31:46.372897 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-9sqgr" event={"ID":"3a62af92-8f89-4800-9724-c651058a0cf2","Type":"ContainerStarted","Data":"b40d1a41988ec8a2d3476064e506a661bdef681b16cbb4df39ad0b87d7284809"} Jan 28 15:31:46 crc kubenswrapper[4893]: I0128 15:31:46.376290 4893 generic.go:334] "Generic (PLEG): container finished" podID="99a38c89-ae5a-4e48-8816-423ce2312cc0" containerID="ecd138730c777d575b3107add31046c2cf963d2f743399215d0c1bb44c20c7fd" exitCode=0 Jan 28 15:31:46 crc kubenswrapper[4893]: I0128 15:31:46.376354 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-jk86f" event={"ID":"99a38c89-ae5a-4e48-8816-423ce2312cc0","Type":"ContainerDied","Data":"ecd138730c777d575b3107add31046c2cf963d2f743399215d0c1bb44c20c7fd"} Jan 28 15:31:46 crc kubenswrapper[4893]: I0128 15:31:46.376378 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-jk86f" event={"ID":"99a38c89-ae5a-4e48-8816-423ce2312cc0","Type":"ContainerStarted","Data":"14ca93ebf9ecd6a5dbd47d27d5b955308f3c3266dcc9aeb07c2d0061863c9d0d"} Jan 28 15:31:46 crc kubenswrapper[4893]: I0128 15:31:46.377812 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-3b1a-account-create-update-r22sm" 
event={"ID":"45994969-6957-49cd-95cc-3da11b3f8a53","Type":"ContainerStarted","Data":"392eee7f0ca217191789ac51d110538f0bb2ec7cb037126685ca1fd0d029d474"} Jan 28 15:31:46 crc kubenswrapper[4893]: I0128 15:31:46.379505 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-n4ks8" event={"ID":"410a2c70-a715-4d47-a056-ff7d2ca6e79f","Type":"ContainerStarted","Data":"cb613d2f7bebb12947561e090e22dc823168c5709b8c5a81229f2324fde2e7d1"} Jan 28 15:31:46 crc kubenswrapper[4893]: I0128 15:31:46.382726 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-558f-account-create-update-hgksj" event={"ID":"64828696-910a-4780-90f7-7022cb08c19f","Type":"ContainerStarted","Data":"51ff38a6fd001c730ac63e8dcaa0912a5dc8818ad97a718c9767456f96a3190b"} Jan 28 15:31:46 crc kubenswrapper[4893]: W0128 15:31:46.384132 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bf82a69_6a29_4c98_8e72_1d4f4a73edda.slice/crio-b8bf9a49317ddbd3573ecde8c1bb96d7adb9cebcf5b6b035730d223a68390d1d WatchSource:0}: Error finding container b8bf9a49317ddbd3573ecde8c1bb96d7adb9cebcf5b6b035730d223a68390d1d: Status 404 returned error can't find the container with id b8bf9a49317ddbd3573ecde8c1bb96d7adb9cebcf5b6b035730d223a68390d1d Jan 28 15:31:47 crc kubenswrapper[4893]: I0128 15:31:47.393659 4893 generic.go:334] "Generic (PLEG): container finished" podID="3bf82a69-6a29-4c98-8e72-1d4f4a73edda" containerID="7c1b2c4d8c0d7129c7b00b10d31b549824cea598aa25f027600826dc9d0bc3ed" exitCode=0 Jan 28 15:31:47 crc kubenswrapper[4893]: I0128 15:31:47.393790 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-82e7-account-create-update-ls97d" event={"ID":"3bf82a69-6a29-4c98-8e72-1d4f4a73edda","Type":"ContainerDied","Data":"7c1b2c4d8c0d7129c7b00b10d31b549824cea598aa25f027600826dc9d0bc3ed"} Jan 28 15:31:47 crc kubenswrapper[4893]: I0128 15:31:47.394864 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-82e7-account-create-update-ls97d" event={"ID":"3bf82a69-6a29-4c98-8e72-1d4f4a73edda","Type":"ContainerStarted","Data":"b8bf9a49317ddbd3573ecde8c1bb96d7adb9cebcf5b6b035730d223a68390d1d"} Jan 28 15:31:47 crc kubenswrapper[4893]: I0128 15:31:47.396676 4893 generic.go:334] "Generic (PLEG): container finished" podID="45994969-6957-49cd-95cc-3da11b3f8a53" containerID="e278042b5d30c73d8d779b41754f50c08b6f7213039453987843d28100a2907e" exitCode=0 Jan 28 15:31:47 crc kubenswrapper[4893]: I0128 15:31:47.396777 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-3b1a-account-create-update-r22sm" event={"ID":"45994969-6957-49cd-95cc-3da11b3f8a53","Type":"ContainerDied","Data":"e278042b5d30c73d8d779b41754f50c08b6f7213039453987843d28100a2907e"} Jan 28 15:31:47 crc kubenswrapper[4893]: I0128 15:31:47.400067 4893 generic.go:334] "Generic (PLEG): container finished" podID="410a2c70-a715-4d47-a056-ff7d2ca6e79f" containerID="330b3897c89ddc89fd844eb0f1f66171322e6c54b44fd00cfe00e20c2f9a7987" exitCode=0 Jan 28 15:31:47 crc kubenswrapper[4893]: I0128 15:31:47.400108 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-n4ks8" event={"ID":"410a2c70-a715-4d47-a056-ff7d2ca6e79f","Type":"ContainerDied","Data":"330b3897c89ddc89fd844eb0f1f66171322e6c54b44fd00cfe00e20c2f9a7987"} Jan 28 15:31:47 crc kubenswrapper[4893]: I0128 15:31:47.402039 4893 generic.go:334] "Generic (PLEG): 
container finished" podID="64828696-910a-4780-90f7-7022cb08c19f" containerID="9e309ea66e2f9e8d9137ef99df5f9ec42b132b30ecbd2f3d341d41b751ffffa4" exitCode=0 Jan 28 15:31:47 crc kubenswrapper[4893]: I0128 15:31:47.402098 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-558f-account-create-update-hgksj" event={"ID":"64828696-910a-4780-90f7-7022cb08c19f","Type":"ContainerDied","Data":"9e309ea66e2f9e8d9137ef99df5f9ec42b132b30ecbd2f3d341d41b751ffffa4"} Jan 28 15:31:47 crc kubenswrapper[4893]: I0128 15:31:47.403692 4893 generic.go:334] "Generic (PLEG): container finished" podID="3a62af92-8f89-4800-9724-c651058a0cf2" containerID="be5e36c8292480ef4d7e345dd4444f028fc0b36d91c9baeedd65a2cf266f70b7" exitCode=0 Jan 28 15:31:47 crc kubenswrapper[4893]: I0128 15:31:47.403769 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-9sqgr" event={"ID":"3a62af92-8f89-4800-9724-c651058a0cf2","Type":"ContainerDied","Data":"be5e36c8292480ef4d7e345dd4444f028fc0b36d91c9baeedd65a2cf266f70b7"} Jan 28 15:31:47 crc kubenswrapper[4893]: I0128 15:31:47.779496 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-jk86f" Jan 28 15:31:47 crc kubenswrapper[4893]: I0128 15:31:47.874334 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8v76\" (UniqueName: \"kubernetes.io/projected/99a38c89-ae5a-4e48-8816-423ce2312cc0-kube-api-access-w8v76\") pod \"99a38c89-ae5a-4e48-8816-423ce2312cc0\" (UID: \"99a38c89-ae5a-4e48-8816-423ce2312cc0\") " Jan 28 15:31:47 crc kubenswrapper[4893]: I0128 15:31:47.874665 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99a38c89-ae5a-4e48-8816-423ce2312cc0-operator-scripts\") pod \"99a38c89-ae5a-4e48-8816-423ce2312cc0\" (UID: \"99a38c89-ae5a-4e48-8816-423ce2312cc0\") " Jan 28 15:31:47 crc kubenswrapper[4893]: I0128 15:31:47.875527 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99a38c89-ae5a-4e48-8816-423ce2312cc0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "99a38c89-ae5a-4e48-8816-423ce2312cc0" (UID: "99a38c89-ae5a-4e48-8816-423ce2312cc0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:31:47 crc kubenswrapper[4893]: I0128 15:31:47.886885 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99a38c89-ae5a-4e48-8816-423ce2312cc0-kube-api-access-w8v76" (OuterVolumeSpecName: "kube-api-access-w8v76") pod "99a38c89-ae5a-4e48-8816-423ce2312cc0" (UID: "99a38c89-ae5a-4e48-8816-423ce2312cc0"). InnerVolumeSpecName "kube-api-access-w8v76". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:47 crc kubenswrapper[4893]: I0128 15:31:47.976836 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8v76\" (UniqueName: \"kubernetes.io/projected/99a38c89-ae5a-4e48-8816-423ce2312cc0-kube-api-access-w8v76\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:47 crc kubenswrapper[4893]: I0128 15:31:47.976867 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99a38c89-ae5a-4e48-8816-423ce2312cc0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:48 crc kubenswrapper[4893]: I0128 15:31:48.417923 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-jk86f" event={"ID":"99a38c89-ae5a-4e48-8816-423ce2312cc0","Type":"ContainerDied","Data":"14ca93ebf9ecd6a5dbd47d27d5b955308f3c3266dcc9aeb07c2d0061863c9d0d"} Jan 28 15:31:48 crc kubenswrapper[4893]: I0128 15:31:48.418006 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14ca93ebf9ecd6a5dbd47d27d5b955308f3c3266dcc9aeb07c2d0061863c9d0d" Jan 28 15:31:48 crc kubenswrapper[4893]: I0128 15:31:48.418050 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-jk86f" Jan 28 15:31:48 crc kubenswrapper[4893]: I0128 15:31:48.719543 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-558f-account-create-update-hgksj" Jan 28 15:31:48 crc kubenswrapper[4893]: I0128 15:31:48.787458 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skhjg\" (UniqueName: \"kubernetes.io/projected/64828696-910a-4780-90f7-7022cb08c19f-kube-api-access-skhjg\") pod \"64828696-910a-4780-90f7-7022cb08c19f\" (UID: \"64828696-910a-4780-90f7-7022cb08c19f\") " Jan 28 15:31:48 crc kubenswrapper[4893]: I0128 15:31:48.787725 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64828696-910a-4780-90f7-7022cb08c19f-operator-scripts\") pod \"64828696-910a-4780-90f7-7022cb08c19f\" (UID: \"64828696-910a-4780-90f7-7022cb08c19f\") " Jan 28 15:31:48 crc kubenswrapper[4893]: I0128 15:31:48.788464 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64828696-910a-4780-90f7-7022cb08c19f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "64828696-910a-4780-90f7-7022cb08c19f" (UID: "64828696-910a-4780-90f7-7022cb08c19f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:31:48 crc kubenswrapper[4893]: I0128 15:31:48.790947 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64828696-910a-4780-90f7-7022cb08c19f-kube-api-access-skhjg" (OuterVolumeSpecName: "kube-api-access-skhjg") pod "64828696-910a-4780-90f7-7022cb08c19f" (UID: "64828696-910a-4780-90f7-7022cb08c19f"). InnerVolumeSpecName "kube-api-access-skhjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:48 crc kubenswrapper[4893]: I0128 15:31:48.870688 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-82e7-account-create-update-ls97d" Jan 28 15:31:48 crc kubenswrapper[4893]: I0128 15:31:48.892882 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64828696-910a-4780-90f7-7022cb08c19f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:48 crc kubenswrapper[4893]: I0128 15:31:48.892922 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skhjg\" (UniqueName: \"kubernetes.io/projected/64828696-910a-4780-90f7-7022cb08c19f-kube-api-access-skhjg\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:48 crc kubenswrapper[4893]: I0128 15:31:48.993812 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn7hr\" (UniqueName: \"kubernetes.io/projected/3bf82a69-6a29-4c98-8e72-1d4f4a73edda-kube-api-access-gn7hr\") pod \"3bf82a69-6a29-4c98-8e72-1d4f4a73edda\" (UID: \"3bf82a69-6a29-4c98-8e72-1d4f4a73edda\") " Jan 28 15:31:48 crc kubenswrapper[4893]: I0128 15:31:48.994681 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bf82a69-6a29-4c98-8e72-1d4f4a73edda-operator-scripts\") pod \"3bf82a69-6a29-4c98-8e72-1d4f4a73edda\" (UID: \"3bf82a69-6a29-4c98-8e72-1d4f4a73edda\") " Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:48.995883 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bf82a69-6a29-4c98-8e72-1d4f4a73edda-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3bf82a69-6a29-4c98-8e72-1d4f4a73edda" (UID: "3bf82a69-6a29-4c98-8e72-1d4f4a73edda"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.011822 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bf82a69-6a29-4c98-8e72-1d4f4a73edda-kube-api-access-gn7hr" (OuterVolumeSpecName: "kube-api-access-gn7hr") pod "3bf82a69-6a29-4c98-8e72-1d4f4a73edda" (UID: "3bf82a69-6a29-4c98-8e72-1d4f4a73edda"). InnerVolumeSpecName "kube-api-access-gn7hr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.040727 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-9sqgr" Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.057112 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-3b1a-account-create-update-r22sm" Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.079378 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-n4ks8" Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.097386 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn7hr\" (UniqueName: \"kubernetes.io/projected/3bf82a69-6a29-4c98-8e72-1d4f4a73edda-kube-api-access-gn7hr\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.098013 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bf82a69-6a29-4c98-8e72-1d4f4a73edda-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.198933 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/410a2c70-a715-4d47-a056-ff7d2ca6e79f-operator-scripts\") pod \"410a2c70-a715-4d47-a056-ff7d2ca6e79f\" (UID: \"410a2c70-a715-4d47-a056-ff7d2ca6e79f\") " Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.199409 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45994969-6957-49cd-95cc-3da11b3f8a53-operator-scripts\") pod \"45994969-6957-49cd-95cc-3da11b3f8a53\" (UID: \"45994969-6957-49cd-95cc-3da11b3f8a53\") " Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.199554 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a62af92-8f89-4800-9724-c651058a0cf2-operator-scripts\") pod \"3a62af92-8f89-4800-9724-c651058a0cf2\" (UID: \"3a62af92-8f89-4800-9724-c651058a0cf2\") " Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.199738 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6gp9\" (UniqueName: \"kubernetes.io/projected/45994969-6957-49cd-95cc-3da11b3f8a53-kube-api-access-p6gp9\") pod \"45994969-6957-49cd-95cc-3da11b3f8a53\" (UID: \"45994969-6957-49cd-95cc-3da11b3f8a53\") " Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.199831 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x45wg\" (UniqueName: \"kubernetes.io/projected/410a2c70-a715-4d47-a056-ff7d2ca6e79f-kube-api-access-x45wg\") pod \"410a2c70-a715-4d47-a056-ff7d2ca6e79f\" (UID: \"410a2c70-a715-4d47-a056-ff7d2ca6e79f\") " Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.199937 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-649rf\" (UniqueName: \"kubernetes.io/projected/3a62af92-8f89-4800-9724-c651058a0cf2-kube-api-access-649rf\") pod \"3a62af92-8f89-4800-9724-c651058a0cf2\" (UID: \"3a62af92-8f89-4800-9724-c651058a0cf2\") " Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.199547 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/410a2c70-a715-4d47-a056-ff7d2ca6e79f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "410a2c70-a715-4d47-a056-ff7d2ca6e79f" (UID: "410a2c70-a715-4d47-a056-ff7d2ca6e79f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.200190 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a62af92-8f89-4800-9724-c651058a0cf2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3a62af92-8f89-4800-9724-c651058a0cf2" (UID: "3a62af92-8f89-4800-9724-c651058a0cf2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.200252 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45994969-6957-49cd-95cc-3da11b3f8a53-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "45994969-6957-49cd-95cc-3da11b3f8a53" (UID: "45994969-6957-49cd-95cc-3da11b3f8a53"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.200631 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/410a2c70-a715-4d47-a056-ff7d2ca6e79f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.200771 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45994969-6957-49cd-95cc-3da11b3f8a53-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.200900 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a62af92-8f89-4800-9724-c651058a0cf2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.203018 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/410a2c70-a715-4d47-a056-ff7d2ca6e79f-kube-api-access-x45wg" (OuterVolumeSpecName: "kube-api-access-x45wg") pod "410a2c70-a715-4d47-a056-ff7d2ca6e79f" (UID: "410a2c70-a715-4d47-a056-ff7d2ca6e79f"). InnerVolumeSpecName "kube-api-access-x45wg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.203066 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a62af92-8f89-4800-9724-c651058a0cf2-kube-api-access-649rf" (OuterVolumeSpecName: "kube-api-access-649rf") pod "3a62af92-8f89-4800-9724-c651058a0cf2" (UID: "3a62af92-8f89-4800-9724-c651058a0cf2"). InnerVolumeSpecName "kube-api-access-649rf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.203384 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45994969-6957-49cd-95cc-3da11b3f8a53-kube-api-access-p6gp9" (OuterVolumeSpecName: "kube-api-access-p6gp9") pod "45994969-6957-49cd-95cc-3da11b3f8a53" (UID: "45994969-6957-49cd-95cc-3da11b3f8a53"). InnerVolumeSpecName "kube-api-access-p6gp9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.303054 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6gp9\" (UniqueName: \"kubernetes.io/projected/45994969-6957-49cd-95cc-3da11b3f8a53-kube-api-access-p6gp9\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.303096 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x45wg\" (UniqueName: \"kubernetes.io/projected/410a2c70-a715-4d47-a056-ff7d2ca6e79f-kube-api-access-x45wg\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.303110 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-649rf\" (UniqueName: \"kubernetes.io/projected/3a62af92-8f89-4800-9724-c651058a0cf2-kube-api-access-649rf\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.425932 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-558f-account-create-update-hgksj" Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.425924 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-558f-account-create-update-hgksj" event={"ID":"64828696-910a-4780-90f7-7022cb08c19f","Type":"ContainerDied","Data":"51ff38a6fd001c730ac63e8dcaa0912a5dc8818ad97a718c9767456f96a3190b"} Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.426060 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51ff38a6fd001c730ac63e8dcaa0912a5dc8818ad97a718c9767456f96a3190b" Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.429898 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-9sqgr" event={"ID":"3a62af92-8f89-4800-9724-c651058a0cf2","Type":"ContainerDied","Data":"b40d1a41988ec8a2d3476064e506a661bdef681b16cbb4df39ad0b87d7284809"} Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.429932 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-9sqgr" Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.429946 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b40d1a41988ec8a2d3476064e506a661bdef681b16cbb4df39ad0b87d7284809" Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.432403 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-82e7-account-create-update-ls97d" event={"ID":"3bf82a69-6a29-4c98-8e72-1d4f4a73edda","Type":"ContainerDied","Data":"b8bf9a49317ddbd3573ecde8c1bb96d7adb9cebcf5b6b035730d223a68390d1d"} Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.432441 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8bf9a49317ddbd3573ecde8c1bb96d7adb9cebcf5b6b035730d223a68390d1d" Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.432486 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-82e7-account-create-update-ls97d" Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.433989 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-3b1a-account-create-update-r22sm" event={"ID":"45994969-6957-49cd-95cc-3da11b3f8a53","Type":"ContainerDied","Data":"392eee7f0ca217191789ac51d110538f0bb2ec7cb037126685ca1fd0d029d474"} Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.434035 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="392eee7f0ca217191789ac51d110538f0bb2ec7cb037126685ca1fd0d029d474" Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.434074 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-3b1a-account-create-update-r22sm" Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.435502 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-n4ks8" event={"ID":"410a2c70-a715-4d47-a056-ff7d2ca6e79f","Type":"ContainerDied","Data":"cb613d2f7bebb12947561e090e22dc823168c5709b8c5a81229f2324fde2e7d1"} Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.435535 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb613d2f7bebb12947561e090e22dc823168c5709b8c5a81229f2324fde2e7d1" Jan 28 15:31:49 crc kubenswrapper[4893]: I0128 15:31:49.435586 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-n4ks8" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.744632 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 28 15:31:50 crc kubenswrapper[4893]: E0128 15:31:50.745351 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410a2c70-a715-4d47-a056-ff7d2ca6e79f" containerName="mariadb-database-create" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.745370 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="410a2c70-a715-4d47-a056-ff7d2ca6e79f" containerName="mariadb-database-create" Jan 28 15:31:50 crc kubenswrapper[4893]: E0128 15:31:50.745380 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a62af92-8f89-4800-9724-c651058a0cf2" containerName="mariadb-database-create" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.745389 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a62af92-8f89-4800-9724-c651058a0cf2" containerName="mariadb-database-create" Jan 28 15:31:50 crc kubenswrapper[4893]: E0128 15:31:50.745414 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64828696-910a-4780-90f7-7022cb08c19f" containerName="mariadb-account-create-update" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.745421 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="64828696-910a-4780-90f7-7022cb08c19f" containerName="mariadb-account-create-update" Jan 28 15:31:50 crc kubenswrapper[4893]: E0128 15:31:50.745465 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf82a69-6a29-4c98-8e72-1d4f4a73edda" containerName="mariadb-account-create-update" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.745496 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf82a69-6a29-4c98-8e72-1d4f4a73edda" containerName="mariadb-account-create-update" Jan 28 15:31:50 crc kubenswrapper[4893]: E0128 15:31:50.745512 4893 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="45994969-6957-49cd-95cc-3da11b3f8a53" containerName="mariadb-account-create-update" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.745519 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="45994969-6957-49cd-95cc-3da11b3f8a53" containerName="mariadb-account-create-update" Jan 28 15:31:50 crc kubenswrapper[4893]: E0128 15:31:50.745530 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99a38c89-ae5a-4e48-8816-423ce2312cc0" containerName="mariadb-database-create" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.745538 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="99a38c89-ae5a-4e48-8816-423ce2312cc0" containerName="mariadb-database-create" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.745716 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="99a38c89-ae5a-4e48-8816-423ce2312cc0" containerName="mariadb-database-create" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.745736 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="45994969-6957-49cd-95cc-3da11b3f8a53" containerName="mariadb-account-create-update" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.745747 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a62af92-8f89-4800-9724-c651058a0cf2" containerName="mariadb-database-create" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.745758 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="64828696-910a-4780-90f7-7022cb08c19f" containerName="mariadb-account-create-update" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.745774 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bf82a69-6a29-4c98-8e72-1d4f4a73edda" containerName="mariadb-account-create-update" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.745786 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="410a2c70-a715-4d47-a056-ff7d2ca6e79f" containerName="mariadb-database-create" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.746554 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.756845 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-compute-fake1-compute-config-data" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.757096 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-9wfv9" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.763203 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p4zn5"] Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.764311 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p4zn5" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.772849 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-scripts" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.775238 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.781015 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.814281 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p4zn5"] Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.829870 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwswj\" (UniqueName: \"kubernetes.io/projected/052e4427-b04a-4a64-80de-5186db93716f-kube-api-access-kwswj\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"052e4427-b04a-4a64-80de-5186db93716f\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.829962 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/052e4427-b04a-4a64-80de-5186db93716f-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"052e4427-b04a-4a64-80de-5186db93716f\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.867247 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.868947 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.877378 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-novncproxy-config-data" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.877424 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.934744 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eb8455a-7cd7-42d6-b9a2-c99841ba7f03-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-p4zn5\" (UID: \"9eb8455a-7cd7-42d6-b9a2-c99841ba7f03\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p4zn5" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.935091 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb8455a-7cd7-42d6-b9a2-c99841ba7f03-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-p4zn5\" (UID: \"9eb8455a-7cd7-42d6-b9a2-c99841ba7f03\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p4zn5" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.935172 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwswj\" (UniqueName: \"kubernetes.io/projected/052e4427-b04a-4a64-80de-5186db93716f-kube-api-access-kwswj\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"052e4427-b04a-4a64-80de-5186db93716f\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.935370 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vczv8\" (UniqueName: \"kubernetes.io/projected/9eb8455a-7cd7-42d6-b9a2-c99841ba7f03-kube-api-access-vczv8\") pod \"nova-kuttl-cell1-conductor-db-sync-p4zn5\" (UID: \"9eb8455a-7cd7-42d6-b9a2-c99841ba7f03\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p4zn5" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.935415 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/052e4427-b04a-4a64-80de-5186db93716f-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"052e4427-b04a-4a64-80de-5186db93716f\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.945559 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/052e4427-b04a-4a64-80de-5186db93716f-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"052e4427-b04a-4a64-80de-5186db93716f\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 15:31:50 crc kubenswrapper[4893]: I0128 15:31:50.955198 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwswj\" (UniqueName: \"kubernetes.io/projected/052e4427-b04a-4a64-80de-5186db93716f-kube-api-access-kwswj\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"052e4427-b04a-4a64-80de-5186db93716f\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 15:31:51 crc kubenswrapper[4893]: I0128 15:31:51.037546 4893 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eb8455a-7cd7-42d6-b9a2-c99841ba7f03-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-p4zn5\" (UID: \"9eb8455a-7cd7-42d6-b9a2-c99841ba7f03\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p4zn5" Jan 28 15:31:51 crc kubenswrapper[4893]: I0128 15:31:51.037626 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c5a4d01-0aec-4669-9f2f-20654ea7b9ce-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"8c5a4d01-0aec-4669-9f2f-20654ea7b9ce\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:31:51 crc kubenswrapper[4893]: I0128 15:31:51.037799 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb8455a-7cd7-42d6-b9a2-c99841ba7f03-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-p4zn5\" (UID: \"9eb8455a-7cd7-42d6-b9a2-c99841ba7f03\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p4zn5" Jan 28 15:31:51 crc kubenswrapper[4893]: I0128 15:31:51.037893 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thqg9\" (UniqueName: \"kubernetes.io/projected/8c5a4d01-0aec-4669-9f2f-20654ea7b9ce-kube-api-access-thqg9\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"8c5a4d01-0aec-4669-9f2f-20654ea7b9ce\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:31:51 crc kubenswrapper[4893]: I0128 15:31:51.038014 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vczv8\" (UniqueName: \"kubernetes.io/projected/9eb8455a-7cd7-42d6-b9a2-c99841ba7f03-kube-api-access-vczv8\") pod \"nova-kuttl-cell1-conductor-db-sync-p4zn5\" (UID: \"9eb8455a-7cd7-42d6-b9a2-c99841ba7f03\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p4zn5" Jan 28 15:31:51 crc kubenswrapper[4893]: I0128 15:31:51.042028 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eb8455a-7cd7-42d6-b9a2-c99841ba7f03-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-p4zn5\" (UID: \"9eb8455a-7cd7-42d6-b9a2-c99841ba7f03\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p4zn5" Jan 28 15:31:51 crc kubenswrapper[4893]: I0128 15:31:51.043244 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb8455a-7cd7-42d6-b9a2-c99841ba7f03-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-p4zn5\" (UID: \"9eb8455a-7cd7-42d6-b9a2-c99841ba7f03\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p4zn5" Jan 28 15:31:51 crc kubenswrapper[4893]: I0128 15:31:51.064945 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vczv8\" (UniqueName: \"kubernetes.io/projected/9eb8455a-7cd7-42d6-b9a2-c99841ba7f03-kube-api-access-vczv8\") pod \"nova-kuttl-cell1-conductor-db-sync-p4zn5\" (UID: \"9eb8455a-7cd7-42d6-b9a2-c99841ba7f03\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p4zn5" Jan 28 15:31:51 crc kubenswrapper[4893]: I0128 15:31:51.078684 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 15:31:51 crc kubenswrapper[4893]: I0128 15:31:51.103656 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p4zn5" Jan 28 15:31:51 crc kubenswrapper[4893]: I0128 15:31:51.140171 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c5a4d01-0aec-4669-9f2f-20654ea7b9ce-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"8c5a4d01-0aec-4669-9f2f-20654ea7b9ce\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:31:51 crc kubenswrapper[4893]: I0128 15:31:51.140325 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thqg9\" (UniqueName: \"kubernetes.io/projected/8c5a4d01-0aec-4669-9f2f-20654ea7b9ce-kube-api-access-thqg9\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"8c5a4d01-0aec-4669-9f2f-20654ea7b9ce\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:31:51 crc kubenswrapper[4893]: I0128 15:31:51.149233 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c5a4d01-0aec-4669-9f2f-20654ea7b9ce-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"8c5a4d01-0aec-4669-9f2f-20654ea7b9ce\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:31:51 crc kubenswrapper[4893]: I0128 15:31:51.161721 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thqg9\" (UniqueName: \"kubernetes.io/projected/8c5a4d01-0aec-4669-9f2f-20654ea7b9ce-kube-api-access-thqg9\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"8c5a4d01-0aec-4669-9f2f-20654ea7b9ce\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:31:51 crc kubenswrapper[4893]: I0128 15:31:51.196923 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:31:51 crc kubenswrapper[4893]: I0128 15:31:51.533678 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 28 15:31:51 crc kubenswrapper[4893]: I0128 15:31:51.624547 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p4zn5"] Jan 28 15:31:51 crc kubenswrapper[4893]: W0128 15:31:51.626692 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9eb8455a_7cd7_42d6_b9a2_c99841ba7f03.slice/crio-518bec51fd5935fea9c9a0dc89b9b65ebf96d928c065ea4d69613f7233b19381 WatchSource:0}: Error finding container 518bec51fd5935fea9c9a0dc89b9b65ebf96d928c065ea4d69613f7233b19381: Status 404 returned error can't find the container with id 518bec51fd5935fea9c9a0dc89b9b65ebf96d928c065ea4d69613f7233b19381 Jan 28 15:31:51 crc kubenswrapper[4893]: I0128 15:31:51.746435 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 15:31:51 crc kubenswrapper[4893]: W0128 15:31:51.750216 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c5a4d01_0aec_4669_9f2f_20654ea7b9ce.slice/crio-56968aec4d64b3941afabc505148b6a190041ab5d8d20d144ecc4e8f522e27ab WatchSource:0}: Error finding container 56968aec4d64b3941afabc505148b6a190041ab5d8d20d144ecc4e8f522e27ab: Status 404 returned error can't find the container with id 56968aec4d64b3941afabc505148b6a190041ab5d8d20d144ecc4e8f522e27ab Jan 28 15:31:52 crc kubenswrapper[4893]: I0128 15:31:52.469262 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"052e4427-b04a-4a64-80de-5186db93716f","Type":"ContainerStarted","Data":"6e3f37652ed4531378f4df6f6a20d47acbc1f4712b4a665aa9b11a07f317b831"} Jan 28 15:31:52 crc kubenswrapper[4893]: I0128 15:31:52.471022 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p4zn5" event={"ID":"9eb8455a-7cd7-42d6-b9a2-c99841ba7f03","Type":"ContainerStarted","Data":"4c212a020f51331b43f6394d092d80a2c6ebc176b74b2247ec0f73b2031d7a82"} Jan 28 15:31:52 crc kubenswrapper[4893]: I0128 15:31:52.471065 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p4zn5" event={"ID":"9eb8455a-7cd7-42d6-b9a2-c99841ba7f03","Type":"ContainerStarted","Data":"518bec51fd5935fea9c9a0dc89b9b65ebf96d928c065ea4d69613f7233b19381"} Jan 28 15:31:52 crc kubenswrapper[4893]: I0128 15:31:52.474018 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"8c5a4d01-0aec-4669-9f2f-20654ea7b9ce","Type":"ContainerStarted","Data":"a97d631d7a902e3eaf7932b633a9bd9f79294ad9076f749475d2fa6316079a63"} Jan 28 15:31:52 crc kubenswrapper[4893]: I0128 15:31:52.474066 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"8c5a4d01-0aec-4669-9f2f-20654ea7b9ce","Type":"ContainerStarted","Data":"56968aec4d64b3941afabc505148b6a190041ab5d8d20d144ecc4e8f522e27ab"} Jan 28 15:31:52 crc kubenswrapper[4893]: I0128 15:31:52.491375 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p4zn5" podStartSLOduration=2.491350774 podStartE2EDuration="2.491350774s" podCreationTimestamp="2026-01-28 15:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:31:52.485529716 +0000 UTC m=+1830.259144764" watchObservedRunningTime="2026-01-28 15:31:52.491350774 +0000 UTC m=+1830.264965802" Jan 28 15:31:52 crc kubenswrapper[4893]: I0128 15:31:52.512085 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podStartSLOduration=2.5120632670000003 podStartE2EDuration="2.512063267s" podCreationTimestamp="2026-01-28 15:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:31:52.501276014 +0000 UTC m=+1830.274891062" watchObservedRunningTime="2026-01-28 15:31:52.512063267 +0000 UTC m=+1830.285678295" Jan 28 15:31:55 crc kubenswrapper[4893]: I0128 15:31:55.250866 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-n8hj8"] Jan 28 15:31:55 crc kubenswrapper[4893]: I0128 15:31:55.252552 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-n8hj8" Jan 28 15:31:55 crc kubenswrapper[4893]: I0128 15:31:55.255650 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-scripts" Jan 28 15:31:55 crc kubenswrapper[4893]: I0128 15:31:55.261417 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 28 15:31:55 crc kubenswrapper[4893]: I0128 15:31:55.268981 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-n8hj8"] Jan 28 15:31:55 crc kubenswrapper[4893]: I0128 15:31:55.317961 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l4jr\" (UniqueName: \"kubernetes.io/projected/b2e8bf8c-3035-4698-bf63-c309167ce05a-kube-api-access-4l4jr\") pod \"nova-kuttl-cell0-conductor-db-sync-n8hj8\" (UID: \"b2e8bf8c-3035-4698-bf63-c309167ce05a\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-n8hj8" Jan 28 15:31:55 crc kubenswrapper[4893]: I0128 15:31:55.318112 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e8bf8c-3035-4698-bf63-c309167ce05a-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-n8hj8\" (UID: \"b2e8bf8c-3035-4698-bf63-c309167ce05a\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-n8hj8" Jan 28 15:31:55 crc kubenswrapper[4893]: I0128 15:31:55.318159 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e8bf8c-3035-4698-bf63-c309167ce05a-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-n8hj8\" (UID: \"b2e8bf8c-3035-4698-bf63-c309167ce05a\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-n8hj8" Jan 28 15:31:55 crc kubenswrapper[4893]: I0128 15:31:55.419657 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l4jr\" (UniqueName: 
\"kubernetes.io/projected/b2e8bf8c-3035-4698-bf63-c309167ce05a-kube-api-access-4l4jr\") pod \"nova-kuttl-cell0-conductor-db-sync-n8hj8\" (UID: \"b2e8bf8c-3035-4698-bf63-c309167ce05a\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-n8hj8" Jan 28 15:31:55 crc kubenswrapper[4893]: I0128 15:31:55.419841 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e8bf8c-3035-4698-bf63-c309167ce05a-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-n8hj8\" (UID: \"b2e8bf8c-3035-4698-bf63-c309167ce05a\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-n8hj8" Jan 28 15:31:55 crc kubenswrapper[4893]: I0128 15:31:55.419868 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e8bf8c-3035-4698-bf63-c309167ce05a-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-n8hj8\" (UID: \"b2e8bf8c-3035-4698-bf63-c309167ce05a\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-n8hj8" Jan 28 15:31:55 crc kubenswrapper[4893]: I0128 15:31:55.437361 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e8bf8c-3035-4698-bf63-c309167ce05a-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-n8hj8\" (UID: \"b2e8bf8c-3035-4698-bf63-c309167ce05a\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-n8hj8" Jan 28 15:31:55 crc kubenswrapper[4893]: I0128 15:31:55.437590 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e8bf8c-3035-4698-bf63-c309167ce05a-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-n8hj8\" (UID: \"b2e8bf8c-3035-4698-bf63-c309167ce05a\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-n8hj8" Jan 28 15:31:55 crc kubenswrapper[4893]: I0128 15:31:55.440782 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l4jr\" (UniqueName: \"kubernetes.io/projected/b2e8bf8c-3035-4698-bf63-c309167ce05a-kube-api-access-4l4jr\") pod \"nova-kuttl-cell0-conductor-db-sync-n8hj8\" (UID: \"b2e8bf8c-3035-4698-bf63-c309167ce05a\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-n8hj8" Jan 28 15:31:55 crc kubenswrapper[4893]: I0128 15:31:55.507428 4893 generic.go:334] "Generic (PLEG): container finished" podID="9eb8455a-7cd7-42d6-b9a2-c99841ba7f03" containerID="4c212a020f51331b43f6394d092d80a2c6ebc176b74b2247ec0f73b2031d7a82" exitCode=0 Jan 28 15:31:55 crc kubenswrapper[4893]: I0128 15:31:55.507457 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p4zn5" event={"ID":"9eb8455a-7cd7-42d6-b9a2-c99841ba7f03","Type":"ContainerDied","Data":"4c212a020f51331b43f6394d092d80a2c6ebc176b74b2247ec0f73b2031d7a82"} Jan 28 15:31:55 crc kubenswrapper[4893]: I0128 15:31:55.579910 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-n8hj8" Jan 28 15:31:56 crc kubenswrapper[4893]: I0128 15:31:56.045312 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-n8hj8"] Jan 28 15:31:56 crc kubenswrapper[4893]: I0128 15:31:56.197861 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:31:56 crc kubenswrapper[4893]: I0128 15:31:56.519324 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-n8hj8" event={"ID":"b2e8bf8c-3035-4698-bf63-c309167ce05a","Type":"ContainerStarted","Data":"39cce8be2d123446b1c3511dba6cb888613a2a0effdd5d86d24143e5bb07ae19"} Jan 28 15:31:56 crc kubenswrapper[4893]: I0128 15:31:56.520037 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-n8hj8" event={"ID":"b2e8bf8c-3035-4698-bf63-c309167ce05a","Type":"ContainerStarted","Data":"e7aa58b5e3b90a73f8a55c42a881a7a4a928a01817b1b7b60d6ea8cbbbce7a6a"} Jan 28 15:31:56 crc kubenswrapper[4893]: I0128 15:31:56.887105 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p4zn5" Jan 28 15:31:56 crc kubenswrapper[4893]: I0128 15:31:56.892161 4893 scope.go:117] "RemoveContainer" containerID="79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51" Jan 28 15:31:56 crc kubenswrapper[4893]: E0128 15:31:56.892565 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:31:56 crc kubenswrapper[4893]: I0128 15:31:56.909817 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-n8hj8" podStartSLOduration=1.909764788 podStartE2EDuration="1.909764788s" podCreationTimestamp="2026-01-28 15:31:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:31:56.547958474 +0000 UTC m=+1834.321573502" watchObservedRunningTime="2026-01-28 15:31:56.909764788 +0000 UTC m=+1834.683379826" Jan 28 15:31:56 crc kubenswrapper[4893]: I0128 15:31:56.954019 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb8455a-7cd7-42d6-b9a2-c99841ba7f03-config-data\") pod \"9eb8455a-7cd7-42d6-b9a2-c99841ba7f03\" (UID: \"9eb8455a-7cd7-42d6-b9a2-c99841ba7f03\") " Jan 28 15:31:56 crc kubenswrapper[4893]: I0128 15:31:56.954168 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eb8455a-7cd7-42d6-b9a2-c99841ba7f03-scripts\") pod \"9eb8455a-7cd7-42d6-b9a2-c99841ba7f03\" (UID: \"9eb8455a-7cd7-42d6-b9a2-c99841ba7f03\") " Jan 28 15:31:56 crc kubenswrapper[4893]: I0128 15:31:56.954223 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vczv8\" (UniqueName: 
\"kubernetes.io/projected/9eb8455a-7cd7-42d6-b9a2-c99841ba7f03-kube-api-access-vczv8\") pod \"9eb8455a-7cd7-42d6-b9a2-c99841ba7f03\" (UID: \"9eb8455a-7cd7-42d6-b9a2-c99841ba7f03\") " Jan 28 15:31:56 crc kubenswrapper[4893]: I0128 15:31:56.960546 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb8455a-7cd7-42d6-b9a2-c99841ba7f03-scripts" (OuterVolumeSpecName: "scripts") pod "9eb8455a-7cd7-42d6-b9a2-c99841ba7f03" (UID: "9eb8455a-7cd7-42d6-b9a2-c99841ba7f03"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:31:56 crc kubenswrapper[4893]: I0128 15:31:56.976966 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eb8455a-7cd7-42d6-b9a2-c99841ba7f03-kube-api-access-vczv8" (OuterVolumeSpecName: "kube-api-access-vczv8") pod "9eb8455a-7cd7-42d6-b9a2-c99841ba7f03" (UID: "9eb8455a-7cd7-42d6-b9a2-c99841ba7f03"). InnerVolumeSpecName "kube-api-access-vczv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:56 crc kubenswrapper[4893]: I0128 15:31:56.977861 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb8455a-7cd7-42d6-b9a2-c99841ba7f03-config-data" (OuterVolumeSpecName: "config-data") pod "9eb8455a-7cd7-42d6-b9a2-c99841ba7f03" (UID: "9eb8455a-7cd7-42d6-b9a2-c99841ba7f03"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:31:57 crc kubenswrapper[4893]: I0128 15:31:57.057460 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb8455a-7cd7-42d6-b9a2-c99841ba7f03-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:57 crc kubenswrapper[4893]: I0128 15:31:57.057510 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eb8455a-7cd7-42d6-b9a2-c99841ba7f03-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:57 crc kubenswrapper[4893]: I0128 15:31:57.057522 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vczv8\" (UniqueName: \"kubernetes.io/projected/9eb8455a-7cd7-42d6-b9a2-c99841ba7f03-kube-api-access-vczv8\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:57 crc kubenswrapper[4893]: I0128 15:31:57.531986 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p4zn5" Jan 28 15:31:57 crc kubenswrapper[4893]: I0128 15:31:57.532142 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p4zn5" event={"ID":"9eb8455a-7cd7-42d6-b9a2-c99841ba7f03","Type":"ContainerDied","Data":"518bec51fd5935fea9c9a0dc89b9b65ebf96d928c065ea4d69613f7233b19381"} Jan 28 15:31:57 crc kubenswrapper[4893]: I0128 15:31:57.532210 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="518bec51fd5935fea9c9a0dc89b9b65ebf96d928c065ea4d69613f7233b19381" Jan 28 15:31:57 crc kubenswrapper[4893]: I0128 15:31:57.608402 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 15:31:57 crc kubenswrapper[4893]: E0128 15:31:57.608874 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb8455a-7cd7-42d6-b9a2-c99841ba7f03" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 28 15:31:57 crc kubenswrapper[4893]: I0128 15:31:57.608898 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb8455a-7cd7-42d6-b9a2-c99841ba7f03" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 28 15:31:57 crc kubenswrapper[4893]: I0128 15:31:57.609107 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eb8455a-7cd7-42d6-b9a2-c99841ba7f03" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 28 15:31:57 crc kubenswrapper[4893]: I0128 15:31:57.609857 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:31:57 crc kubenswrapper[4893]: I0128 15:31:57.614630 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 28 15:31:57 crc kubenswrapper[4893]: I0128 15:31:57.619609 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 15:31:57 crc kubenswrapper[4893]: I0128 15:31:57.768386 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f66cfadb-4c7f-455c-8625-9ae9c7d0d32d-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"f66cfadb-4c7f-455c-8625-9ae9c7d0d32d\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:31:57 crc kubenswrapper[4893]: I0128 15:31:57.768871 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wkh9\" (UniqueName: \"kubernetes.io/projected/f66cfadb-4c7f-455c-8625-9ae9c7d0d32d-kube-api-access-4wkh9\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"f66cfadb-4c7f-455c-8625-9ae9c7d0d32d\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:31:57 crc kubenswrapper[4893]: I0128 15:31:57.871102 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f66cfadb-4c7f-455c-8625-9ae9c7d0d32d-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"f66cfadb-4c7f-455c-8625-9ae9c7d0d32d\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:31:57 crc kubenswrapper[4893]: I0128 15:31:57.871272 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wkh9\" (UniqueName: \"kubernetes.io/projected/f66cfadb-4c7f-455c-8625-9ae9c7d0d32d-kube-api-access-4wkh9\") pod 
\"nova-kuttl-cell1-conductor-0\" (UID: \"f66cfadb-4c7f-455c-8625-9ae9c7d0d32d\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:31:57 crc kubenswrapper[4893]: I0128 15:31:57.874964 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f66cfadb-4c7f-455c-8625-9ae9c7d0d32d-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"f66cfadb-4c7f-455c-8625-9ae9c7d0d32d\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:31:57 crc kubenswrapper[4893]: I0128 15:31:57.891536 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wkh9\" (UniqueName: \"kubernetes.io/projected/f66cfadb-4c7f-455c-8625-9ae9c7d0d32d-kube-api-access-4wkh9\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"f66cfadb-4c7f-455c-8625-9ae9c7d0d32d\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:31:57 crc kubenswrapper[4893]: I0128 15:31:57.926071 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:32:01 crc kubenswrapper[4893]: I0128 15:32:01.198029 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:32:01 crc kubenswrapper[4893]: I0128 15:32:01.218965 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:32:01 crc kubenswrapper[4893]: I0128 15:32:01.586892 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:32:01 crc kubenswrapper[4893]: E0128 15:32:01.848882 4893 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2e8bf8c_3035_4698_bf63_c309167ce05a.slice/crio-conmon-39cce8be2d123446b1c3511dba6cb888613a2a0effdd5d86d24143e5bb07ae19.scope\": RecentStats: unable to find data in memory cache]" Jan 28 15:32:02 crc kubenswrapper[4893]: I0128 15:32:02.375539 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 15:32:02 crc kubenswrapper[4893]: I0128 15:32:02.593227 4893 generic.go:334] "Generic (PLEG): container finished" podID="b2e8bf8c-3035-4698-bf63-c309167ce05a" containerID="39cce8be2d123446b1c3511dba6cb888613a2a0effdd5d86d24143e5bb07ae19" exitCode=0 Jan 28 15:32:02 crc kubenswrapper[4893]: I0128 15:32:02.593310 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-n8hj8" event={"ID":"b2e8bf8c-3035-4698-bf63-c309167ce05a","Type":"ContainerDied","Data":"39cce8be2d123446b1c3511dba6cb888613a2a0effdd5d86d24143e5bb07ae19"} Jan 28 15:32:02 crc kubenswrapper[4893]: I0128 15:32:02.594670 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"052e4427-b04a-4a64-80de-5186db93716f","Type":"ContainerStarted","Data":"043197cad6de7a3f7f019720e2ceab1229aac5d150e1a6cf5e7519001a2b0e32"} Jan 28 15:32:02 crc kubenswrapper[4893]: I0128 15:32:02.594928 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 15:32:02 crc kubenswrapper[4893]: I0128 15:32:02.598194 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"f66cfadb-4c7f-455c-8625-9ae9c7d0d32d","Type":"ContainerStarted","Data":"54bfefe74d27d707588e6efb940fec547ed36c29f213a0e1eb232b23f60fd877"} Jan 28 15:32:02 crc kubenswrapper[4893]: I0128 15:32:02.598237 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"f66cfadb-4c7f-455c-8625-9ae9c7d0d32d","Type":"ContainerStarted","Data":"6e2ec1ce1f4e365d30e3aeb2df546bf784846e6148d79638f8d6bffceedd8828"} Jan 28 15:32:02 crc kubenswrapper[4893]: I0128 15:32:02.598388 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:32:02 crc kubenswrapper[4893]: I0128 15:32:02.632547 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 15:32:02 crc kubenswrapper[4893]: I0128 15:32:02.640985 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podStartSLOduration=2.206540352 podStartE2EDuration="12.640969539s" podCreationTimestamp="2026-01-28 15:31:50 +0000 UTC" firstStartedPulling="2026-01-28 15:31:51.534655353 +0000 UTC m=+1829.308270391" lastFinishedPulling="2026-01-28 15:32:01.96908455 +0000 UTC m=+1839.742699578" observedRunningTime="2026-01-28 15:32:02.635068649 +0000 UTC m=+1840.408683677" watchObservedRunningTime="2026-01-28 15:32:02.640969539 +0000 UTC m=+1840.414584557" Jan 28 15:32:02 crc kubenswrapper[4893]: I0128 15:32:02.654140 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podStartSLOduration=5.654126417 podStartE2EDuration="5.654126417s" podCreationTimestamp="2026-01-28 15:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:32:02.650690104 +0000 UTC m=+1840.424305142" watchObservedRunningTime="2026-01-28 15:32:02.654126417 +0000 UTC m=+1840.427741445" Jan 28 15:32:03 crc kubenswrapper[4893]: I0128 15:32:03.980640 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-n8hj8" Jan 28 15:32:04 crc kubenswrapper[4893]: I0128 15:32:04.089309 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e8bf8c-3035-4698-bf63-c309167ce05a-scripts\") pod \"b2e8bf8c-3035-4698-bf63-c309167ce05a\" (UID: \"b2e8bf8c-3035-4698-bf63-c309167ce05a\") " Jan 28 15:32:04 crc kubenswrapper[4893]: I0128 15:32:04.089443 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e8bf8c-3035-4698-bf63-c309167ce05a-config-data\") pod \"b2e8bf8c-3035-4698-bf63-c309167ce05a\" (UID: \"b2e8bf8c-3035-4698-bf63-c309167ce05a\") " Jan 28 15:32:04 crc kubenswrapper[4893]: I0128 15:32:04.089506 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l4jr\" (UniqueName: \"kubernetes.io/projected/b2e8bf8c-3035-4698-bf63-c309167ce05a-kube-api-access-4l4jr\") pod \"b2e8bf8c-3035-4698-bf63-c309167ce05a\" (UID: \"b2e8bf8c-3035-4698-bf63-c309167ce05a\") " Jan 28 15:32:04 crc kubenswrapper[4893]: I0128 15:32:04.095752 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2e8bf8c-3035-4698-bf63-c309167ce05a-scripts" (OuterVolumeSpecName: "scripts") pod "b2e8bf8c-3035-4698-bf63-c309167ce05a" (UID: "b2e8bf8c-3035-4698-bf63-c309167ce05a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:32:04 crc kubenswrapper[4893]: I0128 15:32:04.095827 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2e8bf8c-3035-4698-bf63-c309167ce05a-kube-api-access-4l4jr" (OuterVolumeSpecName: "kube-api-access-4l4jr") pod "b2e8bf8c-3035-4698-bf63-c309167ce05a" (UID: "b2e8bf8c-3035-4698-bf63-c309167ce05a"). InnerVolumeSpecName "kube-api-access-4l4jr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:32:04 crc kubenswrapper[4893]: I0128 15:32:04.114021 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2e8bf8c-3035-4698-bf63-c309167ce05a-config-data" (OuterVolumeSpecName: "config-data") pod "b2e8bf8c-3035-4698-bf63-c309167ce05a" (UID: "b2e8bf8c-3035-4698-bf63-c309167ce05a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:32:04 crc kubenswrapper[4893]: I0128 15:32:04.191988 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e8bf8c-3035-4698-bf63-c309167ce05a-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:04 crc kubenswrapper[4893]: I0128 15:32:04.192052 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e8bf8c-3035-4698-bf63-c309167ce05a-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:04 crc kubenswrapper[4893]: I0128 15:32:04.192069 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l4jr\" (UniqueName: \"kubernetes.io/projected/b2e8bf8c-3035-4698-bf63-c309167ce05a-kube-api-access-4l4jr\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:04 crc kubenswrapper[4893]: I0128 15:32:04.631752 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-n8hj8" Jan 28 15:32:04 crc kubenswrapper[4893]: I0128 15:32:04.632066 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-n8hj8" event={"ID":"b2e8bf8c-3035-4698-bf63-c309167ce05a","Type":"ContainerDied","Data":"e7aa58b5e3b90a73f8a55c42a881a7a4a928a01817b1b7b60d6ea8cbbbce7a6a"} Jan 28 15:32:04 crc kubenswrapper[4893]: I0128 15:32:04.632105 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7aa58b5e3b90a73f8a55c42a881a7a4a928a01817b1b7b60d6ea8cbbbce7a6a" Jan 28 15:32:04 crc kubenswrapper[4893]: I0128 15:32:04.708866 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 15:32:04 crc kubenswrapper[4893]: E0128 15:32:04.709280 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2e8bf8c-3035-4698-bf63-c309167ce05a" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 28 15:32:04 crc kubenswrapper[4893]: I0128 15:32:04.709297 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2e8bf8c-3035-4698-bf63-c309167ce05a" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 28 15:32:04 crc kubenswrapper[4893]: I0128 15:32:04.709515 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2e8bf8c-3035-4698-bf63-c309167ce05a" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 28 15:32:04 crc kubenswrapper[4893]: I0128 15:32:04.710348 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:32:04 crc kubenswrapper[4893]: I0128 15:32:04.713495 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 28 15:32:04 crc kubenswrapper[4893]: I0128 15:32:04.716380 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 15:32:04 crc kubenswrapper[4893]: I0128 15:32:04.801513 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06f31fa8-9788-45e6-b347-f7f697e29075-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"06f31fa8-9788-45e6-b347-f7f697e29075\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:32:04 crc kubenswrapper[4893]: I0128 15:32:04.801594 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vknx\" (UniqueName: \"kubernetes.io/projected/06f31fa8-9788-45e6-b347-f7f697e29075-kube-api-access-2vknx\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"06f31fa8-9788-45e6-b347-f7f697e29075\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:32:04 crc kubenswrapper[4893]: I0128 15:32:04.903662 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06f31fa8-9788-45e6-b347-f7f697e29075-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"06f31fa8-9788-45e6-b347-f7f697e29075\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:32:04 crc kubenswrapper[4893]: I0128 15:32:04.903721 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vknx\" (UniqueName: \"kubernetes.io/projected/06f31fa8-9788-45e6-b347-f7f697e29075-kube-api-access-2vknx\") pod 
\"nova-kuttl-cell0-conductor-0\" (UID: \"06f31fa8-9788-45e6-b347-f7f697e29075\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:32:04 crc kubenswrapper[4893]: I0128 15:32:04.908621 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06f31fa8-9788-45e6-b347-f7f697e29075-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"06f31fa8-9788-45e6-b347-f7f697e29075\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:32:04 crc kubenswrapper[4893]: I0128 15:32:04.925371 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vknx\" (UniqueName: \"kubernetes.io/projected/06f31fa8-9788-45e6-b347-f7f697e29075-kube-api-access-2vknx\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"06f31fa8-9788-45e6-b347-f7f697e29075\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:32:05 crc kubenswrapper[4893]: I0128 15:32:05.035119 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:32:05 crc kubenswrapper[4893]: I0128 15:32:05.539980 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 15:32:05 crc kubenswrapper[4893]: I0128 15:32:05.645911 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"06f31fa8-9788-45e6-b347-f7f697e29075","Type":"ContainerStarted","Data":"7b050c9d41cf3db349975e9fef0d0ac4d527437d735a20abf6d65141d3a3c669"} Jan 28 15:32:06 crc kubenswrapper[4893]: I0128 15:32:06.068129 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-db-sync-nj5vq"] Jan 28 15:32:06 crc kubenswrapper[4893]: I0128 15:32:06.076824 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-db-sync-nj5vq"] Jan 28 15:32:06 crc kubenswrapper[4893]: I0128 15:32:06.655000 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"06f31fa8-9788-45e6-b347-f7f697e29075","Type":"ContainerStarted","Data":"9f6fe59fa2590dc52da3ade690aa6ff7e2362cfd3db84961ffd6e9c2ab385ec7"} Jan 28 15:32:06 crc kubenswrapper[4893]: I0128 15:32:06.655791 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:32:06 crc kubenswrapper[4893]: I0128 15:32:06.673889 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podStartSLOduration=2.673867806 podStartE2EDuration="2.673867806s" podCreationTimestamp="2026-01-28 15:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:32:06.67252897 +0000 UTC m=+1844.446144008" watchObservedRunningTime="2026-01-28 15:32:06.673867806 +0000 UTC m=+1844.447482854" Jan 28 15:32:06 crc kubenswrapper[4893]: I0128 15:32:06.903212 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b91b788-69c1-4fc5-8a75-7a32476dcd02" path="/var/lib/kubelet/pods/0b91b788-69c1-4fc5-8a75-7a32476dcd02/volumes" Jan 28 15:32:07 crc kubenswrapper[4893]: I0128 15:32:07.953673 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:32:08 crc kubenswrapper[4893]: I0128 15:32:08.381620 4893 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-shbq5"] Jan 28 15:32:08 crc kubenswrapper[4893]: I0128 15:32:08.382951 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-shbq5" Jan 28 15:32:08 crc kubenswrapper[4893]: I0128 15:32:08.385255 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-config-data" Jan 28 15:32:08 crc kubenswrapper[4893]: I0128 15:32:08.386441 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-scripts" Jan 28 15:32:08 crc kubenswrapper[4893]: I0128 15:32:08.399969 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-cbmbs"] Jan 28 15:32:08 crc kubenswrapper[4893]: I0128 15:32:08.401458 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-cbmbs" Jan 28 15:32:08 crc kubenswrapper[4893]: I0128 15:32:08.412912 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-cbmbs"] Jan 28 15:32:08 crc kubenswrapper[4893]: I0128 15:32:08.434017 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-shbq5"] Jan 28 15:32:08 crc kubenswrapper[4893]: I0128 15:32:08.462617 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/669f54db-d6f6-4319-998c-171b213d69d9-scripts\") pod \"nova-kuttl-cell1-host-discover-cbmbs\" (UID: \"669f54db-d6f6-4319-998c-171b213d69d9\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-cbmbs" Jan 28 15:32:08 crc kubenswrapper[4893]: I0128 15:32:08.462913 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/669f54db-d6f6-4319-998c-171b213d69d9-config-data\") pod \"nova-kuttl-cell1-host-discover-cbmbs\" (UID: \"669f54db-d6f6-4319-998c-171b213d69d9\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-cbmbs" Jan 28 15:32:08 crc kubenswrapper[4893]: I0128 15:32:08.463022 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbfbr\" (UniqueName: \"kubernetes.io/projected/c08e48e9-6e2b-4473-9f58-2184de7e8fc8-kube-api-access-sbfbr\") pod \"nova-kuttl-cell1-cell-mapping-shbq5\" (UID: \"c08e48e9-6e2b-4473-9f58-2184de7e8fc8\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-shbq5" Jan 28 15:32:08 crc kubenswrapper[4893]: I0128 15:32:08.463164 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c08e48e9-6e2b-4473-9f58-2184de7e8fc8-scripts\") pod \"nova-kuttl-cell1-cell-mapping-shbq5\" (UID: \"c08e48e9-6e2b-4473-9f58-2184de7e8fc8\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-shbq5" Jan 28 15:32:08 crc kubenswrapper[4893]: I0128 15:32:08.463839 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvw6v\" (UniqueName: \"kubernetes.io/projected/669f54db-d6f6-4319-998c-171b213d69d9-kube-api-access-dvw6v\") pod \"nova-kuttl-cell1-host-discover-cbmbs\" (UID: \"669f54db-d6f6-4319-998c-171b213d69d9\") " 
pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-cbmbs" Jan 28 15:32:08 crc kubenswrapper[4893]: I0128 15:32:08.463961 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c08e48e9-6e2b-4473-9f58-2184de7e8fc8-config-data\") pod \"nova-kuttl-cell1-cell-mapping-shbq5\" (UID: \"c08e48e9-6e2b-4473-9f58-2184de7e8fc8\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-shbq5" Jan 28 15:32:08 crc kubenswrapper[4893]: I0128 15:32:08.565925 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c08e48e9-6e2b-4473-9f58-2184de7e8fc8-scripts\") pod \"nova-kuttl-cell1-cell-mapping-shbq5\" (UID: \"c08e48e9-6e2b-4473-9f58-2184de7e8fc8\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-shbq5" Jan 28 15:32:08 crc kubenswrapper[4893]: I0128 15:32:08.566282 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvw6v\" (UniqueName: \"kubernetes.io/projected/669f54db-d6f6-4319-998c-171b213d69d9-kube-api-access-dvw6v\") pod \"nova-kuttl-cell1-host-discover-cbmbs\" (UID: \"669f54db-d6f6-4319-998c-171b213d69d9\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-cbmbs" Jan 28 15:32:08 crc kubenswrapper[4893]: I0128 15:32:08.566437 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c08e48e9-6e2b-4473-9f58-2184de7e8fc8-config-data\") pod \"nova-kuttl-cell1-cell-mapping-shbq5\" (UID: \"c08e48e9-6e2b-4473-9f58-2184de7e8fc8\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-shbq5" Jan 28 15:32:08 crc kubenswrapper[4893]: I0128 15:32:08.566610 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/669f54db-d6f6-4319-998c-171b213d69d9-scripts\") pod \"nova-kuttl-cell1-host-discover-cbmbs\" (UID: \"669f54db-d6f6-4319-998c-171b213d69d9\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-cbmbs" Jan 28 15:32:08 crc kubenswrapper[4893]: I0128 15:32:08.566753 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/669f54db-d6f6-4319-998c-171b213d69d9-config-data\") pod \"nova-kuttl-cell1-host-discover-cbmbs\" (UID: \"669f54db-d6f6-4319-998c-171b213d69d9\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-cbmbs" Jan 28 15:32:08 crc kubenswrapper[4893]: I0128 15:32:08.566858 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbfbr\" (UniqueName: \"kubernetes.io/projected/c08e48e9-6e2b-4473-9f58-2184de7e8fc8-kube-api-access-sbfbr\") pod \"nova-kuttl-cell1-cell-mapping-shbq5\" (UID: \"c08e48e9-6e2b-4473-9f58-2184de7e8fc8\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-shbq5" Jan 28 15:32:08 crc kubenswrapper[4893]: I0128 15:32:08.571699 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/669f54db-d6f6-4319-998c-171b213d69d9-config-data\") pod \"nova-kuttl-cell1-host-discover-cbmbs\" (UID: \"669f54db-d6f6-4319-998c-171b213d69d9\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-cbmbs" Jan 28 15:32:08 crc kubenswrapper[4893]: I0128 15:32:08.571708 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c08e48e9-6e2b-4473-9f58-2184de7e8fc8-config-data\") pod \"nova-kuttl-cell1-cell-mapping-shbq5\" (UID: \"c08e48e9-6e2b-4473-9f58-2184de7e8fc8\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-shbq5" Jan 28 15:32:08 crc kubenswrapper[4893]: I0128 15:32:08.571878 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/669f54db-d6f6-4319-998c-171b213d69d9-scripts\") pod \"nova-kuttl-cell1-host-discover-cbmbs\" (UID: \"669f54db-d6f6-4319-998c-171b213d69d9\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-cbmbs" Jan 28 15:32:08 crc kubenswrapper[4893]: I0128 15:32:08.572173 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c08e48e9-6e2b-4473-9f58-2184de7e8fc8-scripts\") pod \"nova-kuttl-cell1-cell-mapping-shbq5\" (UID: \"c08e48e9-6e2b-4473-9f58-2184de7e8fc8\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-shbq5" Jan 28 15:32:08 crc kubenswrapper[4893]: I0128 15:32:08.590628 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbfbr\" (UniqueName: \"kubernetes.io/projected/c08e48e9-6e2b-4473-9f58-2184de7e8fc8-kube-api-access-sbfbr\") pod \"nova-kuttl-cell1-cell-mapping-shbq5\" (UID: \"c08e48e9-6e2b-4473-9f58-2184de7e8fc8\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-shbq5" Jan 28 15:32:08 crc kubenswrapper[4893]: I0128 15:32:08.590804 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvw6v\" (UniqueName: \"kubernetes.io/projected/669f54db-d6f6-4319-998c-171b213d69d9-kube-api-access-dvw6v\") pod \"nova-kuttl-cell1-host-discover-cbmbs\" (UID: \"669f54db-d6f6-4319-998c-171b213d69d9\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-cbmbs" Jan 28 15:32:08 crc kubenswrapper[4893]: I0128 15:32:08.706393 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-shbq5" Jan 28 15:32:08 crc kubenswrapper[4893]: I0128 15:32:08.735097 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-cbmbs" Jan 28 15:32:08 crc kubenswrapper[4893]: I0128 15:32:08.893462 4893 scope.go:117] "RemoveContainer" containerID="79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51" Jan 28 15:32:08 crc kubenswrapper[4893]: E0128 15:32:08.894019 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:32:09 crc kubenswrapper[4893]: I0128 15:32:09.154534 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-shbq5"] Jan 28 15:32:09 crc kubenswrapper[4893]: I0128 15:32:09.224615 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-cbmbs"] Jan 28 15:32:09 crc kubenswrapper[4893]: W0128 15:32:09.225834 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod669f54db_d6f6_4319_998c_171b213d69d9.slice/crio-a26c892a005cc51ebc24327b9643ad5e07906ef6a74149571dba35bd7aeb6813 WatchSource:0}: Error finding container a26c892a005cc51ebc24327b9643ad5e07906ef6a74149571dba35bd7aeb6813: Status 404 returned error can't find the container with id a26c892a005cc51ebc24327b9643ad5e07906ef6a74149571dba35bd7aeb6813 Jan 28 15:32:09 crc kubenswrapper[4893]: I0128 15:32:09.702744 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-cbmbs" event={"ID":"669f54db-d6f6-4319-998c-171b213d69d9","Type":"ContainerStarted","Data":"9ba1c98d82a45c65cdace67b81eceb2f342041ff8e9e2462d8359f03d7643867"} Jan 28 15:32:09 crc kubenswrapper[4893]: I0128 15:32:09.703024 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-cbmbs" event={"ID":"669f54db-d6f6-4319-998c-171b213d69d9","Type":"ContainerStarted","Data":"a26c892a005cc51ebc24327b9643ad5e07906ef6a74149571dba35bd7aeb6813"} Jan 28 15:32:09 crc kubenswrapper[4893]: I0128 15:32:09.706614 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-shbq5" event={"ID":"c08e48e9-6e2b-4473-9f58-2184de7e8fc8","Type":"ContainerStarted","Data":"0a1369b8f1048e1e0278a952fbefa77b088ce6ec42c8bb388dea79dbb15a2a0d"} Jan 28 15:32:09 crc kubenswrapper[4893]: I0128 15:32:09.706653 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-shbq5" event={"ID":"c08e48e9-6e2b-4473-9f58-2184de7e8fc8","Type":"ContainerStarted","Data":"49032b0eeb3a3f7d70b4d71077701a5cb41a796029903ed154cf18549f56e533"} Jan 28 15:32:09 crc kubenswrapper[4893]: I0128 15:32:09.723314 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-cbmbs" podStartSLOduration=1.7232966730000001 podStartE2EDuration="1.723296673s" podCreationTimestamp="2026-01-28 15:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:32:09.719046887 +0000 UTC m=+1847.492661925" watchObservedRunningTime="2026-01-28 15:32:09.723296673 
+0000 UTC m=+1847.496911701" Jan 28 15:32:09 crc kubenswrapper[4893]: I0128 15:32:09.734941 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-shbq5" podStartSLOduration=1.734930259 podStartE2EDuration="1.734930259s" podCreationTimestamp="2026-01-28 15:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:32:09.73352013 +0000 UTC m=+1847.507135168" watchObservedRunningTime="2026-01-28 15:32:09.734930259 +0000 UTC m=+1847.508545287" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.060455 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.504567 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-w6p52"] Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.505807 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-w6p52" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.508861 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.509218 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.516956 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-w6p52"] Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.644166 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.645901 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.662322 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.676634 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.702165 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6m5n\" (UniqueName: \"kubernetes.io/projected/2b6fcfff-1e42-4851-a4c9-7a55f8c02a33-kube-api-access-k6m5n\") pod \"nova-kuttl-cell0-cell-mapping-w6p52\" (UID: \"2b6fcfff-1e42-4851-a4c9-7a55f8c02a33\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-w6p52" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.702253 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b6fcfff-1e42-4851-a4c9-7a55f8c02a33-scripts\") pod \"nova-kuttl-cell0-cell-mapping-w6p52\" (UID: \"2b6fcfff-1e42-4851-a4c9-7a55f8c02a33\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-w6p52" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.702303 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b6fcfff-1e42-4851-a4c9-7a55f8c02a33-config-data\") pod \"nova-kuttl-cell0-cell-mapping-w6p52\" (UID: \"2b6fcfff-1e42-4851-a4c9-7a55f8c02a33\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-w6p52" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.744628 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.756126 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.758798 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.774208 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.775432 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.789362 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.792821 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.800400 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.806247 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd2f963f-4ec9-4dfb-adb2-78604d438dfb-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"dd2f963f-4ec9-4dfb-adb2-78604d438dfb\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.806307 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mgf4\" (UniqueName: \"kubernetes.io/projected/f1852e80-6c82-46a4-ba72-3cf5f0b598fc-kube-api-access-2mgf4\") pod \"nova-kuttl-scheduler-0\" (UID: \"f1852e80-6c82-46a4-ba72-3cf5f0b598fc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.806330 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd2f963f-4ec9-4dfb-adb2-78604d438dfb-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"dd2f963f-4ec9-4dfb-adb2-78604d438dfb\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.806359 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1852e80-6c82-46a4-ba72-3cf5f0b598fc-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"f1852e80-6c82-46a4-ba72-3cf5f0b598fc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.806374 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbp8w\" (UniqueName: \"kubernetes.io/projected/d052a7a9-b356-4a86-aa5c-ec11b107b922-kube-api-access-vbp8w\") pod \"nova-kuttl-api-0\" (UID: \"d052a7a9-b356-4a86-aa5c-ec11b107b922\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.806398 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll2tl\" (UniqueName: \"kubernetes.io/projected/dd2f963f-4ec9-4dfb-adb2-78604d438dfb-kube-api-access-ll2tl\") pod \"nova-kuttl-metadata-0\" (UID: \"dd2f963f-4ec9-4dfb-adb2-78604d438dfb\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.806445 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6m5n\" (UniqueName: \"kubernetes.io/projected/2b6fcfff-1e42-4851-a4c9-7a55f8c02a33-kube-api-access-k6m5n\") pod \"nova-kuttl-cell0-cell-mapping-w6p52\" (UID: \"2b6fcfff-1e42-4851-a4c9-7a55f8c02a33\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-w6p52" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.806464 4893 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d052a7a9-b356-4a86-aa5c-ec11b107b922-config-data\") pod \"nova-kuttl-api-0\" (UID: \"d052a7a9-b356-4a86-aa5c-ec11b107b922\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.806519 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b6fcfff-1e42-4851-a4c9-7a55f8c02a33-scripts\") pod \"nova-kuttl-cell0-cell-mapping-w6p52\" (UID: \"2b6fcfff-1e42-4851-a4c9-7a55f8c02a33\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-w6p52" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.806571 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b6fcfff-1e42-4851-a4c9-7a55f8c02a33-config-data\") pod \"nova-kuttl-cell0-cell-mapping-w6p52\" (UID: \"2b6fcfff-1e42-4851-a4c9-7a55f8c02a33\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-w6p52" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.806595 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d052a7a9-b356-4a86-aa5c-ec11b107b922-logs\") pod \"nova-kuttl-api-0\" (UID: \"d052a7a9-b356-4a86-aa5c-ec11b107b922\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.816737 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b6fcfff-1e42-4851-a4c9-7a55f8c02a33-scripts\") pod \"nova-kuttl-cell0-cell-mapping-w6p52\" (UID: \"2b6fcfff-1e42-4851-a4c9-7a55f8c02a33\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-w6p52" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.825290 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b6fcfff-1e42-4851-a4c9-7a55f8c02a33-config-data\") pod \"nova-kuttl-cell0-cell-mapping-w6p52\" (UID: \"2b6fcfff-1e42-4851-a4c9-7a55f8c02a33\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-w6p52" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.836505 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6m5n\" (UniqueName: \"kubernetes.io/projected/2b6fcfff-1e42-4851-a4c9-7a55f8c02a33-kube-api-access-k6m5n\") pod \"nova-kuttl-cell0-cell-mapping-w6p52\" (UID: \"2b6fcfff-1e42-4851-a4c9-7a55f8c02a33\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-w6p52" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.908137 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d052a7a9-b356-4a86-aa5c-ec11b107b922-logs\") pod \"nova-kuttl-api-0\" (UID: \"d052a7a9-b356-4a86-aa5c-ec11b107b922\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.908229 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd2f963f-4ec9-4dfb-adb2-78604d438dfb-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"dd2f963f-4ec9-4dfb-adb2-78604d438dfb\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.908284 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2mgf4\" (UniqueName: \"kubernetes.io/projected/f1852e80-6c82-46a4-ba72-3cf5f0b598fc-kube-api-access-2mgf4\") pod \"nova-kuttl-scheduler-0\" (UID: \"f1852e80-6c82-46a4-ba72-3cf5f0b598fc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.908318 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd2f963f-4ec9-4dfb-adb2-78604d438dfb-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"dd2f963f-4ec9-4dfb-adb2-78604d438dfb\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.908346 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1852e80-6c82-46a4-ba72-3cf5f0b598fc-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"f1852e80-6c82-46a4-ba72-3cf5f0b598fc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.908374 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbp8w\" (UniqueName: \"kubernetes.io/projected/d052a7a9-b356-4a86-aa5c-ec11b107b922-kube-api-access-vbp8w\") pod \"nova-kuttl-api-0\" (UID: \"d052a7a9-b356-4a86-aa5c-ec11b107b922\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.908409 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll2tl\" (UniqueName: \"kubernetes.io/projected/dd2f963f-4ec9-4dfb-adb2-78604d438dfb-kube-api-access-ll2tl\") pod \"nova-kuttl-metadata-0\" (UID: \"dd2f963f-4ec9-4dfb-adb2-78604d438dfb\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.908519 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d052a7a9-b356-4a86-aa5c-ec11b107b922-config-data\") pod \"nova-kuttl-api-0\" (UID: \"d052a7a9-b356-4a86-aa5c-ec11b107b922\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.909419 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d052a7a9-b356-4a86-aa5c-ec11b107b922-logs\") pod \"nova-kuttl-api-0\" (UID: \"d052a7a9-b356-4a86-aa5c-ec11b107b922\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.910023 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd2f963f-4ec9-4dfb-adb2-78604d438dfb-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"dd2f963f-4ec9-4dfb-adb2-78604d438dfb\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.915163 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1852e80-6c82-46a4-ba72-3cf5f0b598fc-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"f1852e80-6c82-46a4-ba72-3cf5f0b598fc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.915788 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd2f963f-4ec9-4dfb-adb2-78604d438dfb-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"dd2f963f-4ec9-4dfb-adb2-78604d438dfb\") " 
pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.933357 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll2tl\" (UniqueName: \"kubernetes.io/projected/dd2f963f-4ec9-4dfb-adb2-78604d438dfb-kube-api-access-ll2tl\") pod \"nova-kuttl-metadata-0\" (UID: \"dd2f963f-4ec9-4dfb-adb2-78604d438dfb\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.934724 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mgf4\" (UniqueName: \"kubernetes.io/projected/f1852e80-6c82-46a4-ba72-3cf5f0b598fc-kube-api-access-2mgf4\") pod \"nova-kuttl-scheduler-0\" (UID: \"f1852e80-6c82-46a4-ba72-3cf5f0b598fc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.937980 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbp8w\" (UniqueName: \"kubernetes.io/projected/d052a7a9-b356-4a86-aa5c-ec11b107b922-kube-api-access-vbp8w\") pod \"nova-kuttl-api-0\" (UID: \"d052a7a9-b356-4a86-aa5c-ec11b107b922\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.939394 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d052a7a9-b356-4a86-aa5c-ec11b107b922-config-data\") pod \"nova-kuttl-api-0\" (UID: \"d052a7a9-b356-4a86-aa5c-ec11b107b922\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:10 crc kubenswrapper[4893]: I0128 15:32:10.965006 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:11 crc kubenswrapper[4893]: I0128 15:32:11.090748 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:11 crc kubenswrapper[4893]: I0128 15:32:11.134499 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-w6p52" Jan 28 15:32:11 crc kubenswrapper[4893]: I0128 15:32:11.180620 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:32:11 crc kubenswrapper[4893]: I0128 15:32:11.491841 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:32:11 crc kubenswrapper[4893]: I0128 15:32:11.725614 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"d052a7a9-b356-4a86-aa5c-ec11b107b922","Type":"ContainerStarted","Data":"b96c06aaf7d6f7a1f5e2925e4c6979482f8d6d1dad9c84da462fd1f6f7f3bdb3"} Jan 28 15:32:11 crc kubenswrapper[4893]: I0128 15:32:11.725658 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"d052a7a9-b356-4a86-aa5c-ec11b107b922","Type":"ContainerStarted","Data":"a984114899e2f9bf7febf391eb61faaa10f0cdfecf92cbc0cf90e357fd27c99b"} Jan 28 15:32:11 crc kubenswrapper[4893]: I0128 15:32:11.775353 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:32:11 crc kubenswrapper[4893]: W0128 15:32:11.799815 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd2f963f_4ec9_4dfb_adb2_78604d438dfb.slice/crio-1a5a787e3719f0b7568fc602b6f7b317da2ce375f258453123b6b8235ac0e1d6 WatchSource:0}: Error finding container 1a5a787e3719f0b7568fc602b6f7b317da2ce375f258453123b6b8235ac0e1d6: Status 404 returned error can't find the container with id 1a5a787e3719f0b7568fc602b6f7b317da2ce375f258453123b6b8235ac0e1d6 Jan 28 15:32:11 crc kubenswrapper[4893]: I0128 15:32:11.885540 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:32:11 crc kubenswrapper[4893]: W0128 15:32:11.902796 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1852e80_6c82_46a4_ba72_3cf5f0b598fc.slice/crio-a112140f5e442f77f2ffeaf0749418b3ee622f74760b1b99c2b977dd14a47a87 WatchSource:0}: Error finding container a112140f5e442f77f2ffeaf0749418b3ee622f74760b1b99c2b977dd14a47a87: Status 404 returned error can't find the container with id a112140f5e442f77f2ffeaf0749418b3ee622f74760b1b99c2b977dd14a47a87 Jan 28 15:32:11 crc kubenswrapper[4893]: I0128 15:32:11.912965 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-w6p52"] Jan 28 15:32:12 crc kubenswrapper[4893]: I0128 15:32:12.734825 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-w6p52" event={"ID":"2b6fcfff-1e42-4851-a4c9-7a55f8c02a33","Type":"ContainerStarted","Data":"176db77ef62236850a5a811427593898e8b70ac8aeafe7650d275e55cc72f6bf"} Jan 28 15:32:12 crc kubenswrapper[4893]: I0128 15:32:12.734876 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-w6p52" event={"ID":"2b6fcfff-1e42-4851-a4c9-7a55f8c02a33","Type":"ContainerStarted","Data":"efe6770280054f49da87b112be659c1c838bd8b23ccd24deb3545d88675d9411"} Jan 28 15:32:12 crc kubenswrapper[4893]: I0128 15:32:12.736971 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"f1852e80-6c82-46a4-ba72-3cf5f0b598fc","Type":"ContainerStarted","Data":"602abcc6e79fcbcaa5329770a3c8a5b0f1e8c1be470c3e597a1bd6410301e0f9"} Jan 28 15:32:12 crc kubenswrapper[4893]: I0128 15:32:12.737005 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"f1852e80-6c82-46a4-ba72-3cf5f0b598fc","Type":"ContainerStarted","Data":"a112140f5e442f77f2ffeaf0749418b3ee622f74760b1b99c2b977dd14a47a87"} Jan 28 15:32:12 crc kubenswrapper[4893]: I0128 15:32:12.739657 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"d052a7a9-b356-4a86-aa5c-ec11b107b922","Type":"ContainerStarted","Data":"6cb668886a384e5ed2548bfafd4b2d550fba29d47a0164387d394619bc2cc375"} Jan 28 15:32:12 crc kubenswrapper[4893]: I0128 15:32:12.741902 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"dd2f963f-4ec9-4dfb-adb2-78604d438dfb","Type":"ContainerStarted","Data":"aeb11c58a12ee34ff0c7cc63e768929b206e972b188eb99a6e315f30ac69cde5"} Jan 28 15:32:12 crc kubenswrapper[4893]: I0128 15:32:12.741945 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"dd2f963f-4ec9-4dfb-adb2-78604d438dfb","Type":"ContainerStarted","Data":"8f011d5b65459f49059b060eb5e75272bd083605244b957320ffe2ece9e57827"} Jan 28 15:32:12 crc kubenswrapper[4893]: I0128 15:32:12.741959 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"dd2f963f-4ec9-4dfb-adb2-78604d438dfb","Type":"ContainerStarted","Data":"1a5a787e3719f0b7568fc602b6f7b317da2ce375f258453123b6b8235ac0e1d6"} Jan 28 15:32:12 crc kubenswrapper[4893]: I0128 15:32:12.743798 4893 generic.go:334] "Generic (PLEG): container finished" podID="669f54db-d6f6-4319-998c-171b213d69d9" containerID="9ba1c98d82a45c65cdace67b81eceb2f342041ff8e9e2462d8359f03d7643867" exitCode=255 Jan 28 15:32:12 crc kubenswrapper[4893]: I0128 15:32:12.743828 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-cbmbs" event={"ID":"669f54db-d6f6-4319-998c-171b213d69d9","Type":"ContainerDied","Data":"9ba1c98d82a45c65cdace67b81eceb2f342041ff8e9e2462d8359f03d7643867"} Jan 28 15:32:12 crc kubenswrapper[4893]: I0128 15:32:12.744097 4893 scope.go:117] "RemoveContainer" containerID="9ba1c98d82a45c65cdace67b81eceb2f342041ff8e9e2462d8359f03d7643867" Jan 28 15:32:12 crc kubenswrapper[4893]: I0128 15:32:12.762807 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-w6p52" podStartSLOduration=2.76278767 podStartE2EDuration="2.76278767s" podCreationTimestamp="2026-01-28 15:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:32:12.750947188 +0000 UTC m=+1850.524562216" watchObservedRunningTime="2026-01-28 15:32:12.76278767 +0000 UTC m=+1850.536402698" Jan 28 15:32:12 crc kubenswrapper[4893]: I0128 15:32:12.797614 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.797586546 podStartE2EDuration="2.797586546s" podCreationTimestamp="2026-01-28 15:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:32:12.78855799 +0000 UTC m=+1850.562173038" watchObservedRunningTime="2026-01-28 15:32:12.797586546 +0000 UTC m=+1850.571201574" Jan 28 15:32:12 crc kubenswrapper[4893]: I0128 15:32:12.815714 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.815693047 podStartE2EDuration="2.815693047s" podCreationTimestamp="2026-01-28 15:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:32:12.803985469 +0000 UTC m=+1850.577600497" watchObservedRunningTime="2026-01-28 15:32:12.815693047 +0000 UTC m=+1850.589308115" Jan 28 15:32:12 crc kubenswrapper[4893]: I0128 15:32:12.843510 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.843485943 podStartE2EDuration="2.843485943s" podCreationTimestamp="2026-01-28 15:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:32:12.822949515 +0000 UTC m=+1850.596564553" watchObservedRunningTime="2026-01-28 15:32:12.843485943 +0000 UTC m=+1850.617100981" Jan 28 15:32:13 crc kubenswrapper[4893]: I0128 15:32:13.757525 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-cbmbs" event={"ID":"669f54db-d6f6-4319-998c-171b213d69d9","Type":"ContainerStarted","Data":"c0a69b997ccbe872776643df080ac65a53c48107a6e9f224e6c5c7c8a12875ac"} Jan 28 15:32:15 crc kubenswrapper[4893]: I0128 15:32:15.036376 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/placement-db-sync-8rs6s"] Jan 28 15:32:15 crc kubenswrapper[4893]: I0128 15:32:15.046201 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/placement-db-sync-8rs6s"] Jan 28 15:32:15 crc kubenswrapper[4893]: I0128 15:32:15.777525 4893 generic.go:334] "Generic (PLEG): container finished" podID="c08e48e9-6e2b-4473-9f58-2184de7e8fc8" containerID="0a1369b8f1048e1e0278a952fbefa77b088ce6ec42c8bb388dea79dbb15a2a0d" exitCode=0 Jan 28 15:32:15 crc kubenswrapper[4893]: I0128 15:32:15.777579 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-shbq5" event={"ID":"c08e48e9-6e2b-4473-9f58-2184de7e8fc8","Type":"ContainerDied","Data":"0a1369b8f1048e1e0278a952fbefa77b088ce6ec42c8bb388dea79dbb15a2a0d"} Jan 28 15:32:16 crc kubenswrapper[4893]: I0128 15:32:16.092393 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:16 crc kubenswrapper[4893]: I0128 15:32:16.093142 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:16 crc kubenswrapper[4893]: I0128 15:32:16.181832 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:32:16 crc kubenswrapper[4893]: I0128 15:32:16.788787 4893 generic.go:334] "Generic (PLEG): container finished" podID="669f54db-d6f6-4319-998c-171b213d69d9" containerID="c0a69b997ccbe872776643df080ac65a53c48107a6e9f224e6c5c7c8a12875ac" exitCode=0 Jan 28 15:32:16 crc kubenswrapper[4893]: I0128 15:32:16.788924 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-cbmbs" event={"ID":"669f54db-d6f6-4319-998c-171b213d69d9","Type":"ContainerDied","Data":"c0a69b997ccbe872776643df080ac65a53c48107a6e9f224e6c5c7c8a12875ac"} Jan 28 15:32:16 crc kubenswrapper[4893]: I0128 15:32:16.789012 4893 scope.go:117] "RemoveContainer" 
containerID="9ba1c98d82a45c65cdace67b81eceb2f342041ff8e9e2462d8359f03d7643867" Jan 28 15:32:16 crc kubenswrapper[4893]: I0128 15:32:16.908153 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17f5dec8-9ade-45ca-b934-5dece754fc53" path="/var/lib/kubelet/pods/17f5dec8-9ade-45ca-b934-5dece754fc53/volumes" Jan 28 15:32:17 crc kubenswrapper[4893]: I0128 15:32:17.200040 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-shbq5" Jan 28 15:32:17 crc kubenswrapper[4893]: I0128 15:32:17.353660 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbfbr\" (UniqueName: \"kubernetes.io/projected/c08e48e9-6e2b-4473-9f58-2184de7e8fc8-kube-api-access-sbfbr\") pod \"c08e48e9-6e2b-4473-9f58-2184de7e8fc8\" (UID: \"c08e48e9-6e2b-4473-9f58-2184de7e8fc8\") " Jan 28 15:32:17 crc kubenswrapper[4893]: I0128 15:32:17.354933 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c08e48e9-6e2b-4473-9f58-2184de7e8fc8-config-data\") pod \"c08e48e9-6e2b-4473-9f58-2184de7e8fc8\" (UID: \"c08e48e9-6e2b-4473-9f58-2184de7e8fc8\") " Jan 28 15:32:17 crc kubenswrapper[4893]: I0128 15:32:17.355006 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c08e48e9-6e2b-4473-9f58-2184de7e8fc8-scripts\") pod \"c08e48e9-6e2b-4473-9f58-2184de7e8fc8\" (UID: \"c08e48e9-6e2b-4473-9f58-2184de7e8fc8\") " Jan 28 15:32:17 crc kubenswrapper[4893]: I0128 15:32:17.358966 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c08e48e9-6e2b-4473-9f58-2184de7e8fc8-kube-api-access-sbfbr" (OuterVolumeSpecName: "kube-api-access-sbfbr") pod "c08e48e9-6e2b-4473-9f58-2184de7e8fc8" (UID: "c08e48e9-6e2b-4473-9f58-2184de7e8fc8"). InnerVolumeSpecName "kube-api-access-sbfbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:32:17 crc kubenswrapper[4893]: I0128 15:32:17.369021 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c08e48e9-6e2b-4473-9f58-2184de7e8fc8-scripts" (OuterVolumeSpecName: "scripts") pod "c08e48e9-6e2b-4473-9f58-2184de7e8fc8" (UID: "c08e48e9-6e2b-4473-9f58-2184de7e8fc8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:32:17 crc kubenswrapper[4893]: I0128 15:32:17.378803 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c08e48e9-6e2b-4473-9f58-2184de7e8fc8-config-data" (OuterVolumeSpecName: "config-data") pod "c08e48e9-6e2b-4473-9f58-2184de7e8fc8" (UID: "c08e48e9-6e2b-4473-9f58-2184de7e8fc8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:32:17 crc kubenswrapper[4893]: I0128 15:32:17.457590 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbfbr\" (UniqueName: \"kubernetes.io/projected/c08e48e9-6e2b-4473-9f58-2184de7e8fc8-kube-api-access-sbfbr\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:17 crc kubenswrapper[4893]: I0128 15:32:17.457637 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c08e48e9-6e2b-4473-9f58-2184de7e8fc8-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:17 crc kubenswrapper[4893]: I0128 15:32:17.457648 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c08e48e9-6e2b-4473-9f58-2184de7e8fc8-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:17 crc kubenswrapper[4893]: I0128 15:32:17.809545 4893 generic.go:334] "Generic (PLEG): container finished" podID="2b6fcfff-1e42-4851-a4c9-7a55f8c02a33" containerID="176db77ef62236850a5a811427593898e8b70ac8aeafe7650d275e55cc72f6bf" exitCode=0 Jan 28 15:32:17 crc kubenswrapper[4893]: I0128 15:32:17.809831 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-w6p52" event={"ID":"2b6fcfff-1e42-4851-a4c9-7a55f8c02a33","Type":"ContainerDied","Data":"176db77ef62236850a5a811427593898e8b70ac8aeafe7650d275e55cc72f6bf"} Jan 28 15:32:17 crc kubenswrapper[4893]: I0128 15:32:17.814291 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-shbq5" event={"ID":"c08e48e9-6e2b-4473-9f58-2184de7e8fc8","Type":"ContainerDied","Data":"49032b0eeb3a3f7d70b4d71077701a5cb41a796029903ed154cf18549f56e533"} Jan 28 15:32:17 crc kubenswrapper[4893]: I0128 15:32:17.814366 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49032b0eeb3a3f7d70b4d71077701a5cb41a796029903ed154cf18549f56e533" Jan 28 15:32:17 crc kubenswrapper[4893]: I0128 15:32:17.814765 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-shbq5" Jan 28 15:32:17 crc kubenswrapper[4893]: I0128 15:32:17.989997 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:32:17 crc kubenswrapper[4893]: I0128 15:32:17.990216 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="d052a7a9-b356-4a86-aa5c-ec11b107b922" containerName="nova-kuttl-api-log" containerID="cri-o://b96c06aaf7d6f7a1f5e2925e4c6979482f8d6d1dad9c84da462fd1f6f7f3bdb3" gracePeriod=30 Jan 28 15:32:17 crc kubenswrapper[4893]: I0128 15:32:17.990655 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="d052a7a9-b356-4a86-aa5c-ec11b107b922" containerName="nova-kuttl-api-api" containerID="cri-o://6cb668886a384e5ed2548bfafd4b2d550fba29d47a0164387d394619bc2cc375" gracePeriod=30 Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.031699 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.032981 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="f1852e80-6c82-46a4-ba72-3cf5f0b598fc" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://602abcc6e79fcbcaa5329770a3c8a5b0f1e8c1be470c3e597a1bd6410301e0f9" gracePeriod=30 Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.145361 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.145757 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="dd2f963f-4ec9-4dfb-adb2-78604d438dfb" containerName="nova-kuttl-metadata-log" containerID="cri-o://8f011d5b65459f49059b060eb5e75272bd083605244b957320ffe2ece9e57827" gracePeriod=30 Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.145870 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="dd2f963f-4ec9-4dfb-adb2-78604d438dfb" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://aeb11c58a12ee34ff0c7cc63e768929b206e972b188eb99a6e315f30ac69cde5" gracePeriod=30 Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.382689 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-cbmbs" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.492094 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.576917 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/669f54db-d6f6-4319-998c-171b213d69d9-scripts\") pod \"669f54db-d6f6-4319-998c-171b213d69d9\" (UID: \"669f54db-d6f6-4319-998c-171b213d69d9\") " Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.577002 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvw6v\" (UniqueName: \"kubernetes.io/projected/669f54db-d6f6-4319-998c-171b213d69d9-kube-api-access-dvw6v\") pod \"669f54db-d6f6-4319-998c-171b213d69d9\" (UID: \"669f54db-d6f6-4319-998c-171b213d69d9\") " Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.577069 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/669f54db-d6f6-4319-998c-171b213d69d9-config-data\") pod \"669f54db-d6f6-4319-998c-171b213d69d9\" (UID: \"669f54db-d6f6-4319-998c-171b213d69d9\") " Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.583197 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/669f54db-d6f6-4319-998c-171b213d69d9-kube-api-access-dvw6v" (OuterVolumeSpecName: "kube-api-access-dvw6v") pod "669f54db-d6f6-4319-998c-171b213d69d9" (UID: "669f54db-d6f6-4319-998c-171b213d69d9"). InnerVolumeSpecName "kube-api-access-dvw6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.584031 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/669f54db-d6f6-4319-998c-171b213d69d9-scripts" (OuterVolumeSpecName: "scripts") pod "669f54db-d6f6-4319-998c-171b213d69d9" (UID: "669f54db-d6f6-4319-998c-171b213d69d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.595839 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.602864 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/669f54db-d6f6-4319-998c-171b213d69d9-config-data" (OuterVolumeSpecName: "config-data") pod "669f54db-d6f6-4319-998c-171b213d69d9" (UID: "669f54db-d6f6-4319-998c-171b213d69d9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.681676 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbp8w\" (UniqueName: \"kubernetes.io/projected/d052a7a9-b356-4a86-aa5c-ec11b107b922-kube-api-access-vbp8w\") pod \"d052a7a9-b356-4a86-aa5c-ec11b107b922\" (UID: \"d052a7a9-b356-4a86-aa5c-ec11b107b922\") " Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.681855 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d052a7a9-b356-4a86-aa5c-ec11b107b922-config-data\") pod \"d052a7a9-b356-4a86-aa5c-ec11b107b922\" (UID: \"d052a7a9-b356-4a86-aa5c-ec11b107b922\") " Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.681919 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d052a7a9-b356-4a86-aa5c-ec11b107b922-logs\") pod \"d052a7a9-b356-4a86-aa5c-ec11b107b922\" (UID: \"d052a7a9-b356-4a86-aa5c-ec11b107b922\") " Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.682536 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/669f54db-d6f6-4319-998c-171b213d69d9-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.682570 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/669f54db-d6f6-4319-998c-171b213d69d9-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.682580 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvw6v\" (UniqueName: \"kubernetes.io/projected/669f54db-d6f6-4319-998c-171b213d69d9-kube-api-access-dvw6v\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.682721 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d052a7a9-b356-4a86-aa5c-ec11b107b922-logs" (OuterVolumeSpecName: "logs") pod "d052a7a9-b356-4a86-aa5c-ec11b107b922" (UID: "d052a7a9-b356-4a86-aa5c-ec11b107b922"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.685423 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d052a7a9-b356-4a86-aa5c-ec11b107b922-kube-api-access-vbp8w" (OuterVolumeSpecName: "kube-api-access-vbp8w") pod "d052a7a9-b356-4a86-aa5c-ec11b107b922" (UID: "d052a7a9-b356-4a86-aa5c-ec11b107b922"). InnerVolumeSpecName "kube-api-access-vbp8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.704437 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d052a7a9-b356-4a86-aa5c-ec11b107b922-config-data" (OuterVolumeSpecName: "config-data") pod "d052a7a9-b356-4a86-aa5c-ec11b107b922" (UID: "d052a7a9-b356-4a86-aa5c-ec11b107b922"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.783549 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll2tl\" (UniqueName: \"kubernetes.io/projected/dd2f963f-4ec9-4dfb-adb2-78604d438dfb-kube-api-access-ll2tl\") pod \"dd2f963f-4ec9-4dfb-adb2-78604d438dfb\" (UID: \"dd2f963f-4ec9-4dfb-adb2-78604d438dfb\") " Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.783653 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd2f963f-4ec9-4dfb-adb2-78604d438dfb-logs\") pod \"dd2f963f-4ec9-4dfb-adb2-78604d438dfb\" (UID: \"dd2f963f-4ec9-4dfb-adb2-78604d438dfb\") " Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.783814 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd2f963f-4ec9-4dfb-adb2-78604d438dfb-config-data\") pod \"dd2f963f-4ec9-4dfb-adb2-78604d438dfb\" (UID: \"dd2f963f-4ec9-4dfb-adb2-78604d438dfb\") " Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.784029 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd2f963f-4ec9-4dfb-adb2-78604d438dfb-logs" (OuterVolumeSpecName: "logs") pod "dd2f963f-4ec9-4dfb-adb2-78604d438dfb" (UID: "dd2f963f-4ec9-4dfb-adb2-78604d438dfb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.784153 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbp8w\" (UniqueName: \"kubernetes.io/projected/d052a7a9-b356-4a86-aa5c-ec11b107b922-kube-api-access-vbp8w\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.784169 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd2f963f-4ec9-4dfb-adb2-78604d438dfb-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.784183 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d052a7a9-b356-4a86-aa5c-ec11b107b922-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.784193 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d052a7a9-b356-4a86-aa5c-ec11b107b922-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.786415 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd2f963f-4ec9-4dfb-adb2-78604d438dfb-kube-api-access-ll2tl" (OuterVolumeSpecName: "kube-api-access-ll2tl") pod "dd2f963f-4ec9-4dfb-adb2-78604d438dfb" (UID: "dd2f963f-4ec9-4dfb-adb2-78604d438dfb"). InnerVolumeSpecName "kube-api-access-ll2tl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.808744 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd2f963f-4ec9-4dfb-adb2-78604d438dfb-config-data" (OuterVolumeSpecName: "config-data") pod "dd2f963f-4ec9-4dfb-adb2-78604d438dfb" (UID: "dd2f963f-4ec9-4dfb-adb2-78604d438dfb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.834909 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-cbmbs" event={"ID":"669f54db-d6f6-4319-998c-171b213d69d9","Type":"ContainerDied","Data":"a26c892a005cc51ebc24327b9643ad5e07906ef6a74149571dba35bd7aeb6813"} Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.834985 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a26c892a005cc51ebc24327b9643ad5e07906ef6a74149571dba35bd7aeb6813" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.835119 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-cbmbs" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.847059 4893 generic.go:334] "Generic (PLEG): container finished" podID="d052a7a9-b356-4a86-aa5c-ec11b107b922" containerID="6cb668886a384e5ed2548bfafd4b2d550fba29d47a0164387d394619bc2cc375" exitCode=0 Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.847097 4893 generic.go:334] "Generic (PLEG): container finished" podID="d052a7a9-b356-4a86-aa5c-ec11b107b922" containerID="b96c06aaf7d6f7a1f5e2925e4c6979482f8d6d1dad9c84da462fd1f6f7f3bdb3" exitCode=143 Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.847160 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"d052a7a9-b356-4a86-aa5c-ec11b107b922","Type":"ContainerDied","Data":"6cb668886a384e5ed2548bfafd4b2d550fba29d47a0164387d394619bc2cc375"} Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.847188 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"d052a7a9-b356-4a86-aa5c-ec11b107b922","Type":"ContainerDied","Data":"b96c06aaf7d6f7a1f5e2925e4c6979482f8d6d1dad9c84da462fd1f6f7f3bdb3"} Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.847200 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"d052a7a9-b356-4a86-aa5c-ec11b107b922","Type":"ContainerDied","Data":"a984114899e2f9bf7febf391eb61faaa10f0cdfecf92cbc0cf90e357fd27c99b"} Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.847215 4893 scope.go:117] "RemoveContainer" containerID="6cb668886a384e5ed2548bfafd4b2d550fba29d47a0164387d394619bc2cc375" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.847341 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.855647 4893 generic.go:334] "Generic (PLEG): container finished" podID="dd2f963f-4ec9-4dfb-adb2-78604d438dfb" containerID="aeb11c58a12ee34ff0c7cc63e768929b206e972b188eb99a6e315f30ac69cde5" exitCode=0 Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.855997 4893 generic.go:334] "Generic (PLEG): container finished" podID="dd2f963f-4ec9-4dfb-adb2-78604d438dfb" containerID="8f011d5b65459f49059b060eb5e75272bd083605244b957320ffe2ece9e57827" exitCode=143 Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.855710 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.855718 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"dd2f963f-4ec9-4dfb-adb2-78604d438dfb","Type":"ContainerDied","Data":"aeb11c58a12ee34ff0c7cc63e768929b206e972b188eb99a6e315f30ac69cde5"} Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.859978 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"dd2f963f-4ec9-4dfb-adb2-78604d438dfb","Type":"ContainerDied","Data":"8f011d5b65459f49059b060eb5e75272bd083605244b957320ffe2ece9e57827"} Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.860227 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"dd2f963f-4ec9-4dfb-adb2-78604d438dfb","Type":"ContainerDied","Data":"1a5a787e3719f0b7568fc602b6f7b317da2ce375f258453123b6b8235ac0e1d6"} Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.882842 4893 scope.go:117] "RemoveContainer" containerID="b96c06aaf7d6f7a1f5e2925e4c6979482f8d6d1dad9c84da462fd1f6f7f3bdb3" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.894437 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd2f963f-4ec9-4dfb-adb2-78604d438dfb-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.894491 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ll2tl\" (UniqueName: \"kubernetes.io/projected/dd2f963f-4ec9-4dfb-adb2-78604d438dfb-kube-api-access-ll2tl\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.919434 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.920787 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.951563 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.965558 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:32:18 crc kubenswrapper[4893]: E0128 15:32:18.965908 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d052a7a9-b356-4a86-aa5c-ec11b107b922" containerName="nova-kuttl-api-api" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.965927 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="d052a7a9-b356-4a86-aa5c-ec11b107b922" containerName="nova-kuttl-api-api" Jan 28 15:32:18 crc kubenswrapper[4893]: E0128 15:32:18.965939 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c08e48e9-6e2b-4473-9f58-2184de7e8fc8" containerName="nova-manage" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.965947 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="c08e48e9-6e2b-4473-9f58-2184de7e8fc8" containerName="nova-manage" Jan 28 15:32:18 crc kubenswrapper[4893]: E0128 15:32:18.965961 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="669f54db-d6f6-4319-998c-171b213d69d9" containerName="nova-manage" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.965967 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="669f54db-d6f6-4319-998c-171b213d69d9" 
containerName="nova-manage" Jan 28 15:32:18 crc kubenswrapper[4893]: E0128 15:32:18.965984 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2f963f-4ec9-4dfb-adb2-78604d438dfb" containerName="nova-kuttl-metadata-metadata" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.965990 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2f963f-4ec9-4dfb-adb2-78604d438dfb" containerName="nova-kuttl-metadata-metadata" Jan 28 15:32:18 crc kubenswrapper[4893]: E0128 15:32:18.966002 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d052a7a9-b356-4a86-aa5c-ec11b107b922" containerName="nova-kuttl-api-log" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.966009 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="d052a7a9-b356-4a86-aa5c-ec11b107b922" containerName="nova-kuttl-api-log" Jan 28 15:32:18 crc kubenswrapper[4893]: E0128 15:32:18.966021 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2f963f-4ec9-4dfb-adb2-78604d438dfb" containerName="nova-kuttl-metadata-log" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.966028 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2f963f-4ec9-4dfb-adb2-78604d438dfb" containerName="nova-kuttl-metadata-log" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.966179 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd2f963f-4ec9-4dfb-adb2-78604d438dfb" containerName="nova-kuttl-metadata-metadata" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.966194 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="d052a7a9-b356-4a86-aa5c-ec11b107b922" containerName="nova-kuttl-api-log" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.966204 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="669f54db-d6f6-4319-998c-171b213d69d9" containerName="nova-manage" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.966215 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd2f963f-4ec9-4dfb-adb2-78604d438dfb" containerName="nova-kuttl-metadata-log" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.966226 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="d052a7a9-b356-4a86-aa5c-ec11b107b922" containerName="nova-kuttl-api-api" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.966238 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="c08e48e9-6e2b-4473-9f58-2184de7e8fc8" containerName="nova-manage" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.966247 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="669f54db-d6f6-4319-998c-171b213d69d9" containerName="nova-manage" Jan 28 15:32:18 crc kubenswrapper[4893]: E0128 15:32:18.966422 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="669f54db-d6f6-4319-998c-171b213d69d9" containerName="nova-manage" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.966433 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="669f54db-d6f6-4319-998c-171b213d69d9" containerName="nova-manage" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.967337 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.971149 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.989850 4893 scope.go:117] "RemoveContainer" containerID="6cb668886a384e5ed2548bfafd4b2d550fba29d47a0164387d394619bc2cc375" Jan 28 15:32:18 crc kubenswrapper[4893]: E0128 15:32:18.990398 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cb668886a384e5ed2548bfafd4b2d550fba29d47a0164387d394619bc2cc375\": container with ID starting with 6cb668886a384e5ed2548bfafd4b2d550fba29d47a0164387d394619bc2cc375 not found: ID does not exist" containerID="6cb668886a384e5ed2548bfafd4b2d550fba29d47a0164387d394619bc2cc375" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.990444 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cb668886a384e5ed2548bfafd4b2d550fba29d47a0164387d394619bc2cc375"} err="failed to get container status \"6cb668886a384e5ed2548bfafd4b2d550fba29d47a0164387d394619bc2cc375\": rpc error: code = NotFound desc = could not find container \"6cb668886a384e5ed2548bfafd4b2d550fba29d47a0164387d394619bc2cc375\": container with ID starting with 6cb668886a384e5ed2548bfafd4b2d550fba29d47a0164387d394619bc2cc375 not found: ID does not exist" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.990510 4893 scope.go:117] "RemoveContainer" containerID="b96c06aaf7d6f7a1f5e2925e4c6979482f8d6d1dad9c84da462fd1f6f7f3bdb3" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.990490 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:32:18 crc kubenswrapper[4893]: E0128 15:32:18.990843 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b96c06aaf7d6f7a1f5e2925e4c6979482f8d6d1dad9c84da462fd1f6f7f3bdb3\": container with ID starting with b96c06aaf7d6f7a1f5e2925e4c6979482f8d6d1dad9c84da462fd1f6f7f3bdb3 not found: ID does not exist" containerID="b96c06aaf7d6f7a1f5e2925e4c6979482f8d6d1dad9c84da462fd1f6f7f3bdb3" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.990879 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b96c06aaf7d6f7a1f5e2925e4c6979482f8d6d1dad9c84da462fd1f6f7f3bdb3"} err="failed to get container status \"b96c06aaf7d6f7a1f5e2925e4c6979482f8d6d1dad9c84da462fd1f6f7f3bdb3\": rpc error: code = NotFound desc = could not find container \"b96c06aaf7d6f7a1f5e2925e4c6979482f8d6d1dad9c84da462fd1f6f7f3bdb3\": container with ID starting with b96c06aaf7d6f7a1f5e2925e4c6979482f8d6d1dad9c84da462fd1f6f7f3bdb3 not found: ID does not exist" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.990905 4893 scope.go:117] "RemoveContainer" containerID="6cb668886a384e5ed2548bfafd4b2d550fba29d47a0164387d394619bc2cc375" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.992662 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cb668886a384e5ed2548bfafd4b2d550fba29d47a0164387d394619bc2cc375"} err="failed to get container status \"6cb668886a384e5ed2548bfafd4b2d550fba29d47a0164387d394619bc2cc375\": rpc error: code = NotFound desc = could not find container \"6cb668886a384e5ed2548bfafd4b2d550fba29d47a0164387d394619bc2cc375\": container with ID 
starting with 6cb668886a384e5ed2548bfafd4b2d550fba29d47a0164387d394619bc2cc375 not found: ID does not exist" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.992694 4893 scope.go:117] "RemoveContainer" containerID="b96c06aaf7d6f7a1f5e2925e4c6979482f8d6d1dad9c84da462fd1f6f7f3bdb3" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.993977 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b96c06aaf7d6f7a1f5e2925e4c6979482f8d6d1dad9c84da462fd1f6f7f3bdb3"} err="failed to get container status \"b96c06aaf7d6f7a1f5e2925e4c6979482f8d6d1dad9c84da462fd1f6f7f3bdb3\": rpc error: code = NotFound desc = could not find container \"b96c06aaf7d6f7a1f5e2925e4c6979482f8d6d1dad9c84da462fd1f6f7f3bdb3\": container with ID starting with b96c06aaf7d6f7a1f5e2925e4c6979482f8d6d1dad9c84da462fd1f6f7f3bdb3 not found: ID does not exist" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.994007 4893 scope.go:117] "RemoveContainer" containerID="aeb11c58a12ee34ff0c7cc63e768929b206e972b188eb99a6e315f30ac69cde5" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.995384 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/639be1ac-d286-4b83-9dac-e60115db84d8-logs\") pod \"nova-kuttl-api-0\" (UID: \"639be1ac-d286-4b83-9dac-e60115db84d8\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.995494 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4gvn\" (UniqueName: \"kubernetes.io/projected/639be1ac-d286-4b83-9dac-e60115db84d8-kube-api-access-q4gvn\") pod \"nova-kuttl-api-0\" (UID: \"639be1ac-d286-4b83-9dac-e60115db84d8\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:18 crc kubenswrapper[4893]: I0128 15:32:18.995539 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/639be1ac-d286-4b83-9dac-e60115db84d8-config-data\") pod \"nova-kuttl-api-0\" (UID: \"639be1ac-d286-4b83-9dac-e60115db84d8\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.006931 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.013600 4893 scope.go:117] "RemoveContainer" containerID="8f011d5b65459f49059b060eb5e75272bd083605244b957320ffe2ece9e57827" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.019071 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.020777 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.022977 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.032278 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.048057 4893 scope.go:117] "RemoveContainer" containerID="aeb11c58a12ee34ff0c7cc63e768929b206e972b188eb99a6e315f30ac69cde5" Jan 28 15:32:19 crc kubenswrapper[4893]: E0128 15:32:19.052730 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aeb11c58a12ee34ff0c7cc63e768929b206e972b188eb99a6e315f30ac69cde5\": container with ID starting with aeb11c58a12ee34ff0c7cc63e768929b206e972b188eb99a6e315f30ac69cde5 not found: ID does not exist" containerID="aeb11c58a12ee34ff0c7cc63e768929b206e972b188eb99a6e315f30ac69cde5" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.052793 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aeb11c58a12ee34ff0c7cc63e768929b206e972b188eb99a6e315f30ac69cde5"} err="failed to get container status \"aeb11c58a12ee34ff0c7cc63e768929b206e972b188eb99a6e315f30ac69cde5\": rpc error: code = NotFound desc = could not find container \"aeb11c58a12ee34ff0c7cc63e768929b206e972b188eb99a6e315f30ac69cde5\": container with ID starting with aeb11c58a12ee34ff0c7cc63e768929b206e972b188eb99a6e315f30ac69cde5 not found: ID does not exist" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.052827 4893 scope.go:117] "RemoveContainer" containerID="8f011d5b65459f49059b060eb5e75272bd083605244b957320ffe2ece9e57827" Jan 28 15:32:19 crc kubenswrapper[4893]: E0128 15:32:19.053190 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f011d5b65459f49059b060eb5e75272bd083605244b957320ffe2ece9e57827\": container with ID starting with 8f011d5b65459f49059b060eb5e75272bd083605244b957320ffe2ece9e57827 not found: ID does not exist" containerID="8f011d5b65459f49059b060eb5e75272bd083605244b957320ffe2ece9e57827" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.053269 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f011d5b65459f49059b060eb5e75272bd083605244b957320ffe2ece9e57827"} err="failed to get container status \"8f011d5b65459f49059b060eb5e75272bd083605244b957320ffe2ece9e57827\": rpc error: code = NotFound desc = could not find container \"8f011d5b65459f49059b060eb5e75272bd083605244b957320ffe2ece9e57827\": container with ID starting with 8f011d5b65459f49059b060eb5e75272bd083605244b957320ffe2ece9e57827 not found: ID does not exist" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.053323 4893 scope.go:117] "RemoveContainer" containerID="aeb11c58a12ee34ff0c7cc63e768929b206e972b188eb99a6e315f30ac69cde5" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.053791 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aeb11c58a12ee34ff0c7cc63e768929b206e972b188eb99a6e315f30ac69cde5"} err="failed to get container status \"aeb11c58a12ee34ff0c7cc63e768929b206e972b188eb99a6e315f30ac69cde5\": rpc error: code = NotFound desc = could not find container \"aeb11c58a12ee34ff0c7cc63e768929b206e972b188eb99a6e315f30ac69cde5\": container 
with ID starting with aeb11c58a12ee34ff0c7cc63e768929b206e972b188eb99a6e315f30ac69cde5 not found: ID does not exist" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.053834 4893 scope.go:117] "RemoveContainer" containerID="8f011d5b65459f49059b060eb5e75272bd083605244b957320ffe2ece9e57827" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.054247 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f011d5b65459f49059b060eb5e75272bd083605244b957320ffe2ece9e57827"} err="failed to get container status \"8f011d5b65459f49059b060eb5e75272bd083605244b957320ffe2ece9e57827\": rpc error: code = NotFound desc = could not find container \"8f011d5b65459f49059b060eb5e75272bd083605244b957320ffe2ece9e57827\": container with ID starting with 8f011d5b65459f49059b060eb5e75272bd083605244b957320ffe2ece9e57827 not found: ID does not exist" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.099528 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4gvn\" (UniqueName: \"kubernetes.io/projected/639be1ac-d286-4b83-9dac-e60115db84d8-kube-api-access-q4gvn\") pod \"nova-kuttl-api-0\" (UID: \"639be1ac-d286-4b83-9dac-e60115db84d8\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.099618 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/639be1ac-d286-4b83-9dac-e60115db84d8-config-data\") pod \"nova-kuttl-api-0\" (UID: \"639be1ac-d286-4b83-9dac-e60115db84d8\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.099658 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78ztk\" (UniqueName: \"kubernetes.io/projected/b6ca7f24-6f14-40d8-9450-a93b06a21aad-kube-api-access-78ztk\") pod \"nova-kuttl-metadata-0\" (UID: \"b6ca7f24-6f14-40d8-9450-a93b06a21aad\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.099786 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6ca7f24-6f14-40d8-9450-a93b06a21aad-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"b6ca7f24-6f14-40d8-9450-a93b06a21aad\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.099863 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6ca7f24-6f14-40d8-9450-a93b06a21aad-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"b6ca7f24-6f14-40d8-9450-a93b06a21aad\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.099906 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/639be1ac-d286-4b83-9dac-e60115db84d8-logs\") pod \"nova-kuttl-api-0\" (UID: \"639be1ac-d286-4b83-9dac-e60115db84d8\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.103925 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/639be1ac-d286-4b83-9dac-e60115db84d8-logs\") pod \"nova-kuttl-api-0\" (UID: \"639be1ac-d286-4b83-9dac-e60115db84d8\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:19 
crc kubenswrapper[4893]: I0128 15:32:19.106947 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/639be1ac-d286-4b83-9dac-e60115db84d8-config-data\") pod \"nova-kuttl-api-0\" (UID: \"639be1ac-d286-4b83-9dac-e60115db84d8\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.116954 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4gvn\" (UniqueName: \"kubernetes.io/projected/639be1ac-d286-4b83-9dac-e60115db84d8-kube-api-access-q4gvn\") pod \"nova-kuttl-api-0\" (UID: \"639be1ac-d286-4b83-9dac-e60115db84d8\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.201621 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78ztk\" (UniqueName: \"kubernetes.io/projected/b6ca7f24-6f14-40d8-9450-a93b06a21aad-kube-api-access-78ztk\") pod \"nova-kuttl-metadata-0\" (UID: \"b6ca7f24-6f14-40d8-9450-a93b06a21aad\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.201761 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6ca7f24-6f14-40d8-9450-a93b06a21aad-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"b6ca7f24-6f14-40d8-9450-a93b06a21aad\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.201822 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6ca7f24-6f14-40d8-9450-a93b06a21aad-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"b6ca7f24-6f14-40d8-9450-a93b06a21aad\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.202333 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6ca7f24-6f14-40d8-9450-a93b06a21aad-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"b6ca7f24-6f14-40d8-9450-a93b06a21aad\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.206219 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6ca7f24-6f14-40d8-9450-a93b06a21aad-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"b6ca7f24-6f14-40d8-9450-a93b06a21aad\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.222221 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78ztk\" (UniqueName: \"kubernetes.io/projected/b6ca7f24-6f14-40d8-9450-a93b06a21aad-kube-api-access-78ztk\") pod \"nova-kuttl-metadata-0\" (UID: \"b6ca7f24-6f14-40d8-9450-a93b06a21aad\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.274239 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-w6p52" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.285598 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.302301 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6m5n\" (UniqueName: \"kubernetes.io/projected/2b6fcfff-1e42-4851-a4c9-7a55f8c02a33-kube-api-access-k6m5n\") pod \"2b6fcfff-1e42-4851-a4c9-7a55f8c02a33\" (UID: \"2b6fcfff-1e42-4851-a4c9-7a55f8c02a33\") " Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.302353 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b6fcfff-1e42-4851-a4c9-7a55f8c02a33-config-data\") pod \"2b6fcfff-1e42-4851-a4c9-7a55f8c02a33\" (UID: \"2b6fcfff-1e42-4851-a4c9-7a55f8c02a33\") " Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.302513 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b6fcfff-1e42-4851-a4c9-7a55f8c02a33-scripts\") pod \"2b6fcfff-1e42-4851-a4c9-7a55f8c02a33\" (UID: \"2b6fcfff-1e42-4851-a4c9-7a55f8c02a33\") " Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.310213 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b6fcfff-1e42-4851-a4c9-7a55f8c02a33-kube-api-access-k6m5n" (OuterVolumeSpecName: "kube-api-access-k6m5n") pod "2b6fcfff-1e42-4851-a4c9-7a55f8c02a33" (UID: "2b6fcfff-1e42-4851-a4c9-7a55f8c02a33"). InnerVolumeSpecName "kube-api-access-k6m5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.310318 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b6fcfff-1e42-4851-a4c9-7a55f8c02a33-scripts" (OuterVolumeSpecName: "scripts") pod "2b6fcfff-1e42-4851-a4c9-7a55f8c02a33" (UID: "2b6fcfff-1e42-4851-a4c9-7a55f8c02a33"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.324975 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b6fcfff-1e42-4851-a4c9-7a55f8c02a33-config-data" (OuterVolumeSpecName: "config-data") pod "2b6fcfff-1e42-4851-a4c9-7a55f8c02a33" (UID: "2b6fcfff-1e42-4851-a4c9-7a55f8c02a33"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.346230 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.405062 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b6fcfff-1e42-4851-a4c9-7a55f8c02a33-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.405102 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6m5n\" (UniqueName: \"kubernetes.io/projected/2b6fcfff-1e42-4851-a4c9-7a55f8c02a33-kube-api-access-k6m5n\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:19 crc kubenswrapper[4893]: I0128 15:32:19.405117 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b6fcfff-1e42-4851-a4c9-7a55f8c02a33-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:20 crc kubenswrapper[4893]: I0128 15:32:19.875215 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-w6p52" event={"ID":"2b6fcfff-1e42-4851-a4c9-7a55f8c02a33","Type":"ContainerDied","Data":"efe6770280054f49da87b112be659c1c838bd8b23ccd24deb3545d88675d9411"} Jan 28 15:32:20 crc kubenswrapper[4893]: I0128 15:32:19.875599 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efe6770280054f49da87b112be659c1c838bd8b23ccd24deb3545d88675d9411" Jan 28 15:32:20 crc kubenswrapper[4893]: I0128 15:32:19.875695 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-w6p52" Jan 28 15:32:20 crc kubenswrapper[4893]: I0128 15:32:19.880991 4893 generic.go:334] "Generic (PLEG): container finished" podID="f1852e80-6c82-46a4-ba72-3cf5f0b598fc" containerID="602abcc6e79fcbcaa5329770a3c8a5b0f1e8c1be470c3e597a1bd6410301e0f9" exitCode=0 Jan 28 15:32:20 crc kubenswrapper[4893]: I0128 15:32:19.881037 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"f1852e80-6c82-46a4-ba72-3cf5f0b598fc","Type":"ContainerDied","Data":"602abcc6e79fcbcaa5329770a3c8a5b0f1e8c1be470c3e597a1bd6410301e0f9"} Jan 28 15:32:20 crc kubenswrapper[4893]: I0128 15:32:20.044558 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-x5kjk"] Jan 28 15:32:20 crc kubenswrapper[4893]: I0128 15:32:20.054026 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-x5kjk"] Jan 28 15:32:20 crc kubenswrapper[4893]: I0128 15:32:20.063583 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:32:20 crc kubenswrapper[4893]: I0128 15:32:20.150059 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:32:20 crc kubenswrapper[4893]: I0128 15:32:20.842056 4893 util.go:48] "No ready sandbox for pod can be found. 
Jan 28 15:32:20 crc kubenswrapper[4893]: I0128 15:32:20.842056 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 28 15:32:20 crc kubenswrapper[4893]: I0128 15:32:20.846501 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 28 15:32:20 crc kubenswrapper[4893]: I0128 15:32:20.892750 4893 scope.go:117] "RemoveContainer" containerID="79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51"
Jan 28 15:32:20 crc kubenswrapper[4893]: E0128 15:32:20.893341 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd"
Jan 28 15:32:20 crc kubenswrapper[4893]: I0128 15:32:20.905070 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 28 15:32:20 crc kubenswrapper[4893]: I0128 15:32:20.911974 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74a83a0b-ab25-4eaa-90d5-054bdddfadc8" path="/var/lib/kubelet/pods/74a83a0b-ab25-4eaa-90d5-054bdddfadc8/volumes"
Jan 28 15:32:20 crc kubenswrapper[4893]: I0128 15:32:20.913021 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d052a7a9-b356-4a86-aa5c-ec11b107b922" path="/var/lib/kubelet/pods/d052a7a9-b356-4a86-aa5c-ec11b107b922/volumes"
Jan 28 15:32:20 crc kubenswrapper[4893]: I0128 15:32:20.913660 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd2f963f-4ec9-4dfb-adb2-78604d438dfb" path="/var/lib/kubelet/pods/dd2f963f-4ec9-4dfb-adb2-78604d438dfb/volumes"
Jan 28 15:32:20 crc kubenswrapper[4893]: W0128 15:32:20.938968 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6ca7f24_6f14_40d8_9450_a93b06a21aad.slice/crio-d389e4f39da78072a34b7ab1d32608680c3dd429aef71b0b172682f4ab1b0091 WatchSource:0}: Error finding container d389e4f39da78072a34b7ab1d32608680c3dd429aef71b0b172682f4ab1b0091: Status 404 returned error can't find the container with id d389e4f39da78072a34b7ab1d32608680c3dd429aef71b0b172682f4ab1b0091
Jan 28 15:32:20 crc kubenswrapper[4893]: I0128 15:32:20.944704 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 28 15:32:20 crc kubenswrapper[4893]: I0128 15:32:20.944732 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"f1852e80-6c82-46a4-ba72-3cf5f0b598fc","Type":"ContainerDied","Data":"a112140f5e442f77f2ffeaf0749418b3ee622f74760b1b99c2b977dd14a47a87"}
Jan 28 15:32:20 crc kubenswrapper[4893]: I0128 15:32:20.944763 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"639be1ac-d286-4b83-9dac-e60115db84d8","Type":"ContainerStarted","Data":"14de1c16a2d152ae87ea9950450cb28e851f096c8525ab476c0236d69df9bec9"}
Jan 28 15:32:20 crc kubenswrapper[4893]: I0128 15:32:20.944788 4893 scope.go:117] "RemoveContainer" containerID="602abcc6e79fcbcaa5329770a3c8a5b0f1e8c1be470c3e597a1bd6410301e0f9"
Jan 28 15:32:20 crc kubenswrapper[4893]: I0128 15:32:20.947977 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mgf4\" (UniqueName: 
\"kubernetes.io/projected/f1852e80-6c82-46a4-ba72-3cf5f0b598fc-kube-api-access-2mgf4\") pod \"f1852e80-6c82-46a4-ba72-3cf5f0b598fc\" (UID: \"f1852e80-6c82-46a4-ba72-3cf5f0b598fc\") " Jan 28 15:32:20 crc kubenswrapper[4893]: I0128 15:32:20.948628 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1852e80-6c82-46a4-ba72-3cf5f0b598fc-config-data\") pod \"f1852e80-6c82-46a4-ba72-3cf5f0b598fc\" (UID: \"f1852e80-6c82-46a4-ba72-3cf5f0b598fc\") " Jan 28 15:32:20 crc kubenswrapper[4893]: I0128 15:32:20.955647 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1852e80-6c82-46a4-ba72-3cf5f0b598fc-kube-api-access-2mgf4" (OuterVolumeSpecName: "kube-api-access-2mgf4") pod "f1852e80-6c82-46a4-ba72-3cf5f0b598fc" (UID: "f1852e80-6c82-46a4-ba72-3cf5f0b598fc"). InnerVolumeSpecName "kube-api-access-2mgf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:32:20 crc kubenswrapper[4893]: I0128 15:32:20.983306 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1852e80-6c82-46a4-ba72-3cf5f0b598fc-config-data" (OuterVolumeSpecName: "config-data") pod "f1852e80-6c82-46a4-ba72-3cf5f0b598fc" (UID: "f1852e80-6c82-46a4-ba72-3cf5f0b598fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.050196 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mgf4\" (UniqueName: \"kubernetes.io/projected/f1852e80-6c82-46a4-ba72-3cf5f0b598fc-kube-api-access-2mgf4\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.050236 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1852e80-6c82-46a4-ba72-3cf5f0b598fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.248994 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.268555 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.275201 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:32:21 crc kubenswrapper[4893]: E0128 15:32:21.275841 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1852e80-6c82-46a4-ba72-3cf5f0b598fc" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.275864 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1852e80-6c82-46a4-ba72-3cf5f0b598fc" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:32:21 crc kubenswrapper[4893]: E0128 15:32:21.275883 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b6fcfff-1e42-4851-a4c9-7a55f8c02a33" containerName="nova-manage" Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.275892 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b6fcfff-1e42-4851-a4c9-7a55f8c02a33" containerName="nova-manage" Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.276078 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1852e80-6c82-46a4-ba72-3cf5f0b598fc" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.276098 
4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b6fcfff-1e42-4851-a4c9-7a55f8c02a33" containerName="nova-manage" Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.277001 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.280296 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.283806 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.467266 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jndr9\" (UniqueName: \"kubernetes.io/projected/b03317ea-e576-4079-9676-713f7767d401-kube-api-access-jndr9\") pod \"nova-kuttl-scheduler-0\" (UID: \"b03317ea-e576-4079-9676-713f7767d401\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.468043 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b03317ea-e576-4079-9676-713f7767d401-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"b03317ea-e576-4079-9676-713f7767d401\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.571052 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jndr9\" (UniqueName: \"kubernetes.io/projected/b03317ea-e576-4079-9676-713f7767d401-kube-api-access-jndr9\") pod \"nova-kuttl-scheduler-0\" (UID: \"b03317ea-e576-4079-9676-713f7767d401\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.571158 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b03317ea-e576-4079-9676-713f7767d401-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"b03317ea-e576-4079-9676-713f7767d401\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.590548 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b03317ea-e576-4079-9676-713f7767d401-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"b03317ea-e576-4079-9676-713f7767d401\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.593447 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jndr9\" (UniqueName: \"kubernetes.io/projected/b03317ea-e576-4079-9676-713f7767d401-kube-api-access-jndr9\") pod \"nova-kuttl-scheduler-0\" (UID: \"b03317ea-e576-4079-9676-713f7767d401\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.596392 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.930741 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"b6ca7f24-6f14-40d8-9450-a93b06a21aad","Type":"ContainerStarted","Data":"2a331c24ea4fff39888a36a6e8fafa1ba4e28618ce42cca8380d165b93906429"} Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.930784 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"b6ca7f24-6f14-40d8-9450-a93b06a21aad","Type":"ContainerStarted","Data":"ea9b39365a26abc7eaa07d0c215b3a671d8b0f221fc5c8fd86387ba1d1682278"} Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.930984 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"b6ca7f24-6f14-40d8-9450-a93b06a21aad","Type":"ContainerStarted","Data":"d389e4f39da78072a34b7ab1d32608680c3dd429aef71b0b172682f4ab1b0091"} Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.931101 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="b6ca7f24-6f14-40d8-9450-a93b06a21aad" containerName="nova-kuttl-metadata-log" containerID="cri-o://ea9b39365a26abc7eaa07d0c215b3a671d8b0f221fc5c8fd86387ba1d1682278" gracePeriod=30 Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.931369 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="b6ca7f24-6f14-40d8-9450-a93b06a21aad" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://2a331c24ea4fff39888a36a6e8fafa1ba4e28618ce42cca8380d165b93906429" gracePeriod=30 Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.939339 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"639be1ac-d286-4b83-9dac-e60115db84d8","Type":"ContainerStarted","Data":"fd7d3f8d14d5fba0b8e4e22c56078ee174fc63da79860b777f6547e998d66552"} Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.939382 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"639be1ac-d286-4b83-9dac-e60115db84d8","Type":"ContainerStarted","Data":"43e2f3de8aac44b573e671a9319da89519104a33c2e68477f08aea9615681bc5"} Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.939622 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="639be1ac-d286-4b83-9dac-e60115db84d8" containerName="nova-kuttl-api-log" containerID="cri-o://43e2f3de8aac44b573e671a9319da89519104a33c2e68477f08aea9615681bc5" gracePeriod=30 Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.939763 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="639be1ac-d286-4b83-9dac-e60115db84d8" containerName="nova-kuttl-api-api" containerID="cri-o://fd7d3f8d14d5fba0b8e4e22c56078ee174fc63da79860b777f6547e998d66552" gracePeriod=30 Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.956312 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=3.95628302 podStartE2EDuration="3.95628302s" podCreationTimestamp="2026-01-28 15:32:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-28 15:32:21.956046143 +0000 UTC m=+1859.729661171" watchObservedRunningTime="2026-01-28 15:32:21.95628302 +0000 UTC m=+1859.729898048" Jan 28 15:32:21 crc kubenswrapper[4893]: I0128 15:32:21.978407 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=3.97838487 podStartE2EDuration="3.97838487s" podCreationTimestamp="2026-01-28 15:32:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:32:21.976771777 +0000 UTC m=+1859.750386805" watchObservedRunningTime="2026-01-28 15:32:21.97838487 +0000 UTC m=+1859.751999898" Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.022204 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:32:22 crc kubenswrapper[4893]: E0128 15:32:22.320320 4893 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6ca7f24_6f14_40d8_9450_a93b06a21aad.slice/crio-conmon-2a331c24ea4fff39888a36a6e8fafa1ba4e28618ce42cca8380d165b93906429.scope\": RecentStats: unable to find data in memory cache]" Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.627335 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.641081 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.796658 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/639be1ac-d286-4b83-9dac-e60115db84d8-logs\") pod \"639be1ac-d286-4b83-9dac-e60115db84d8\" (UID: \"639be1ac-d286-4b83-9dac-e60115db84d8\") " Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.796747 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6ca7f24-6f14-40d8-9450-a93b06a21aad-config-data\") pod \"b6ca7f24-6f14-40d8-9450-a93b06a21aad\" (UID: \"b6ca7f24-6f14-40d8-9450-a93b06a21aad\") " Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.796797 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/639be1ac-d286-4b83-9dac-e60115db84d8-config-data\") pod \"639be1ac-d286-4b83-9dac-e60115db84d8\" (UID: \"639be1ac-d286-4b83-9dac-e60115db84d8\") " Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.796833 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4gvn\" (UniqueName: \"kubernetes.io/projected/639be1ac-d286-4b83-9dac-e60115db84d8-kube-api-access-q4gvn\") pod \"639be1ac-d286-4b83-9dac-e60115db84d8\" (UID: \"639be1ac-d286-4b83-9dac-e60115db84d8\") " Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.796878 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6ca7f24-6f14-40d8-9450-a93b06a21aad-logs\") pod \"b6ca7f24-6f14-40d8-9450-a93b06a21aad\" (UID: \"b6ca7f24-6f14-40d8-9450-a93b06a21aad\") " Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.796949 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-78ztk\" (UniqueName: \"kubernetes.io/projected/b6ca7f24-6f14-40d8-9450-a93b06a21aad-kube-api-access-78ztk\") pod \"b6ca7f24-6f14-40d8-9450-a93b06a21aad\" (UID: \"b6ca7f24-6f14-40d8-9450-a93b06a21aad\") " Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.797404 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6ca7f24-6f14-40d8-9450-a93b06a21aad-logs" (OuterVolumeSpecName: "logs") pod "b6ca7f24-6f14-40d8-9450-a93b06a21aad" (UID: "b6ca7f24-6f14-40d8-9450-a93b06a21aad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.798219 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/639be1ac-d286-4b83-9dac-e60115db84d8-logs" (OuterVolumeSpecName: "logs") pod "639be1ac-d286-4b83-9dac-e60115db84d8" (UID: "639be1ac-d286-4b83-9dac-e60115db84d8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.801430 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6ca7f24-6f14-40d8-9450-a93b06a21aad-kube-api-access-78ztk" (OuterVolumeSpecName: "kube-api-access-78ztk") pod "b6ca7f24-6f14-40d8-9450-a93b06a21aad" (UID: "b6ca7f24-6f14-40d8-9450-a93b06a21aad"). InnerVolumeSpecName "kube-api-access-78ztk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.802307 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/639be1ac-d286-4b83-9dac-e60115db84d8-kube-api-access-q4gvn" (OuterVolumeSpecName: "kube-api-access-q4gvn") pod "639be1ac-d286-4b83-9dac-e60115db84d8" (UID: "639be1ac-d286-4b83-9dac-e60115db84d8"). InnerVolumeSpecName "kube-api-access-q4gvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.820018 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6ca7f24-6f14-40d8-9450-a93b06a21aad-config-data" (OuterVolumeSpecName: "config-data") pod "b6ca7f24-6f14-40d8-9450-a93b06a21aad" (UID: "b6ca7f24-6f14-40d8-9450-a93b06a21aad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.820062 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/639be1ac-d286-4b83-9dac-e60115db84d8-config-data" (OuterVolumeSpecName: "config-data") pod "639be1ac-d286-4b83-9dac-e60115db84d8" (UID: "639be1ac-d286-4b83-9dac-e60115db84d8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.898995 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/639be1ac-d286-4b83-9dac-e60115db84d8-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.899034 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4gvn\" (UniqueName: \"kubernetes.io/projected/639be1ac-d286-4b83-9dac-e60115db84d8-kube-api-access-q4gvn\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.899047 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6ca7f24-6f14-40d8-9450-a93b06a21aad-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.899058 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78ztk\" (UniqueName: \"kubernetes.io/projected/b6ca7f24-6f14-40d8-9450-a93b06a21aad-kube-api-access-78ztk\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.899069 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/639be1ac-d286-4b83-9dac-e60115db84d8-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.899079 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6ca7f24-6f14-40d8-9450-a93b06a21aad-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.904911 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1852e80-6c82-46a4-ba72-3cf5f0b598fc" path="/var/lib/kubelet/pods/f1852e80-6c82-46a4-ba72-3cf5f0b598fc/volumes" Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.952568 4893 generic.go:334] "Generic (PLEG): container finished" podID="b6ca7f24-6f14-40d8-9450-a93b06a21aad" containerID="2a331c24ea4fff39888a36a6e8fafa1ba4e28618ce42cca8380d165b93906429" exitCode=0 Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.952612 4893 generic.go:334] "Generic (PLEG): container finished" podID="b6ca7f24-6f14-40d8-9450-a93b06a21aad" containerID="ea9b39365a26abc7eaa07d0c215b3a671d8b0f221fc5c8fd86387ba1d1682278" exitCode=143 Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.952642 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.952671 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"b6ca7f24-6f14-40d8-9450-a93b06a21aad","Type":"ContainerDied","Data":"2a331c24ea4fff39888a36a6e8fafa1ba4e28618ce42cca8380d165b93906429"} Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.952704 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"b6ca7f24-6f14-40d8-9450-a93b06a21aad","Type":"ContainerDied","Data":"ea9b39365a26abc7eaa07d0c215b3a671d8b0f221fc5c8fd86387ba1d1682278"} Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.952716 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"b6ca7f24-6f14-40d8-9450-a93b06a21aad","Type":"ContainerDied","Data":"d389e4f39da78072a34b7ab1d32608680c3dd429aef71b0b172682f4ab1b0091"} Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.952734 4893 scope.go:117] "RemoveContainer" containerID="2a331c24ea4fff39888a36a6e8fafa1ba4e28618ce42cca8380d165b93906429" Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.956645 4893 generic.go:334] "Generic (PLEG): container finished" podID="639be1ac-d286-4b83-9dac-e60115db84d8" containerID="fd7d3f8d14d5fba0b8e4e22c56078ee174fc63da79860b777f6547e998d66552" exitCode=0 Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.956679 4893 generic.go:334] "Generic (PLEG): container finished" podID="639be1ac-d286-4b83-9dac-e60115db84d8" containerID="43e2f3de8aac44b573e671a9319da89519104a33c2e68477f08aea9615681bc5" exitCode=143 Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.956734 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"639be1ac-d286-4b83-9dac-e60115db84d8","Type":"ContainerDied","Data":"fd7d3f8d14d5fba0b8e4e22c56078ee174fc63da79860b777f6547e998d66552"} Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.956760 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"639be1ac-d286-4b83-9dac-e60115db84d8","Type":"ContainerDied","Data":"43e2f3de8aac44b573e671a9319da89519104a33c2e68477f08aea9615681bc5"} Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.956770 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"639be1ac-d286-4b83-9dac-e60115db84d8","Type":"ContainerDied","Data":"14de1c16a2d152ae87ea9950450cb28e851f096c8525ab476c0236d69df9bec9"} Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.956818 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.960374 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"b03317ea-e576-4079-9676-713f7767d401","Type":"ContainerStarted","Data":"a86d34e89d158a6e7763c5a89689a859c0bb3a9f351803ae46c71d82f45c0126"} Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.960410 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"b03317ea-e576-4079-9676-713f7767d401","Type":"ContainerStarted","Data":"eb35e8b8889595f36ca1551c79a2da296cec5dfe1f8f7c5e6721d5f4ff691e97"} Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.980686 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:32:22 crc kubenswrapper[4893]: I0128 15:32:22.989372 4893 scope.go:117] "RemoveContainer" containerID="ea9b39365a26abc7eaa07d0c215b3a671d8b0f221fc5c8fd86387ba1d1682278" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.000360 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.042539 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.042605 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.044708 4893 scope.go:117] "RemoveContainer" containerID="2a331c24ea4fff39888a36a6e8fafa1ba4e28618ce42cca8380d165b93906429" Jan 28 15:32:23 crc kubenswrapper[4893]: E0128 15:32:23.048844 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a331c24ea4fff39888a36a6e8fafa1ba4e28618ce42cca8380d165b93906429\": container with ID starting with 2a331c24ea4fff39888a36a6e8fafa1ba4e28618ce42cca8380d165b93906429 not found: ID does not exist" containerID="2a331c24ea4fff39888a36a6e8fafa1ba4e28618ce42cca8380d165b93906429" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.048880 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a331c24ea4fff39888a36a6e8fafa1ba4e28618ce42cca8380d165b93906429"} err="failed to get container status \"2a331c24ea4fff39888a36a6e8fafa1ba4e28618ce42cca8380d165b93906429\": rpc error: code = NotFound desc = could not find container \"2a331c24ea4fff39888a36a6e8fafa1ba4e28618ce42cca8380d165b93906429\": container with ID starting with 2a331c24ea4fff39888a36a6e8fafa1ba4e28618ce42cca8380d165b93906429 not found: ID does not exist" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.048902 4893 scope.go:117] "RemoveContainer" containerID="ea9b39365a26abc7eaa07d0c215b3a671d8b0f221fc5c8fd86387ba1d1682278" Jan 28 15:32:23 crc kubenswrapper[4893]: E0128 15:32:23.049195 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea9b39365a26abc7eaa07d0c215b3a671d8b0f221fc5c8fd86387ba1d1682278\": container with ID starting with ea9b39365a26abc7eaa07d0c215b3a671d8b0f221fc5c8fd86387ba1d1682278 not found: ID does not exist" containerID="ea9b39365a26abc7eaa07d0c215b3a671d8b0f221fc5c8fd86387ba1d1682278" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.049218 4893 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ea9b39365a26abc7eaa07d0c215b3a671d8b0f221fc5c8fd86387ba1d1682278"} err="failed to get container status \"ea9b39365a26abc7eaa07d0c215b3a671d8b0f221fc5c8fd86387ba1d1682278\": rpc error: code = NotFound desc = could not find container \"ea9b39365a26abc7eaa07d0c215b3a671d8b0f221fc5c8fd86387ba1d1682278\": container with ID starting with ea9b39365a26abc7eaa07d0c215b3a671d8b0f221fc5c8fd86387ba1d1682278 not found: ID does not exist" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.049232 4893 scope.go:117] "RemoveContainer" containerID="2a331c24ea4fff39888a36a6e8fafa1ba4e28618ce42cca8380d165b93906429" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.049445 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a331c24ea4fff39888a36a6e8fafa1ba4e28618ce42cca8380d165b93906429"} err="failed to get container status \"2a331c24ea4fff39888a36a6e8fafa1ba4e28618ce42cca8380d165b93906429\": rpc error: code = NotFound desc = could not find container \"2a331c24ea4fff39888a36a6e8fafa1ba4e28618ce42cca8380d165b93906429\": container with ID starting with 2a331c24ea4fff39888a36a6e8fafa1ba4e28618ce42cca8380d165b93906429 not found: ID does not exist" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.049462 4893 scope.go:117] "RemoveContainer" containerID="ea9b39365a26abc7eaa07d0c215b3a671d8b0f221fc5c8fd86387ba1d1682278" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.049690 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea9b39365a26abc7eaa07d0c215b3a671d8b0f221fc5c8fd86387ba1d1682278"} err="failed to get container status \"ea9b39365a26abc7eaa07d0c215b3a671d8b0f221fc5c8fd86387ba1d1682278\": rpc error: code = NotFound desc = could not find container \"ea9b39365a26abc7eaa07d0c215b3a671d8b0f221fc5c8fd86387ba1d1682278\": container with ID starting with ea9b39365a26abc7eaa07d0c215b3a671d8b0f221fc5c8fd86387ba1d1682278 not found: ID does not exist" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.049707 4893 scope.go:117] "RemoveContainer" containerID="fd7d3f8d14d5fba0b8e4e22c56078ee174fc63da79860b777f6547e998d66552" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.055066 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:32:23 crc kubenswrapper[4893]: E0128 15:32:23.055408 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6ca7f24-6f14-40d8-9450-a93b06a21aad" containerName="nova-kuttl-metadata-metadata" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.055426 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6ca7f24-6f14-40d8-9450-a93b06a21aad" containerName="nova-kuttl-metadata-metadata" Jan 28 15:32:23 crc kubenswrapper[4893]: E0128 15:32:23.055436 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="639be1ac-d286-4b83-9dac-e60115db84d8" containerName="nova-kuttl-api-log" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.055442 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="639be1ac-d286-4b83-9dac-e60115db84d8" containerName="nova-kuttl-api-log" Jan 28 15:32:23 crc kubenswrapper[4893]: E0128 15:32:23.055453 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6ca7f24-6f14-40d8-9450-a93b06a21aad" containerName="nova-kuttl-metadata-log" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.055460 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6ca7f24-6f14-40d8-9450-a93b06a21aad" 
containerName="nova-kuttl-metadata-log" Jan 28 15:32:23 crc kubenswrapper[4893]: E0128 15:32:23.055499 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="639be1ac-d286-4b83-9dac-e60115db84d8" containerName="nova-kuttl-api-api" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.055505 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="639be1ac-d286-4b83-9dac-e60115db84d8" containerName="nova-kuttl-api-api" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.055646 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="639be1ac-d286-4b83-9dac-e60115db84d8" containerName="nova-kuttl-api-api" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.055658 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6ca7f24-6f14-40d8-9450-a93b06a21aad" containerName="nova-kuttl-metadata-metadata" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.055670 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6ca7f24-6f14-40d8-9450-a93b06a21aad" containerName="nova-kuttl-metadata-log" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.055683 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="639be1ac-d286-4b83-9dac-e60115db84d8" containerName="nova-kuttl-api-log" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.056806 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.056784309 podStartE2EDuration="2.056784309s" podCreationTimestamp="2026-01-28 15:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:32:23.012871855 +0000 UTC m=+1860.786486893" watchObservedRunningTime="2026-01-28 15:32:23.056784309 +0000 UTC m=+1860.830399337" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.070450 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.075859 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.078983 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.084556 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.094780 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.096695 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.096778 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.121332 4893 scope.go:117] "RemoveContainer" containerID="43e2f3de8aac44b573e671a9319da89519104a33c2e68477f08aea9615681bc5" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.139758 4893 scope.go:117] "RemoveContainer" containerID="fd7d3f8d14d5fba0b8e4e22c56078ee174fc63da79860b777f6547e998d66552" Jan 28 15:32:23 crc kubenswrapper[4893]: E0128 15:32:23.140168 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd7d3f8d14d5fba0b8e4e22c56078ee174fc63da79860b777f6547e998d66552\": container with ID starting with fd7d3f8d14d5fba0b8e4e22c56078ee174fc63da79860b777f6547e998d66552 not found: ID does not exist" containerID="fd7d3f8d14d5fba0b8e4e22c56078ee174fc63da79860b777f6547e998d66552" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.140226 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd7d3f8d14d5fba0b8e4e22c56078ee174fc63da79860b777f6547e998d66552"} err="failed to get container status \"fd7d3f8d14d5fba0b8e4e22c56078ee174fc63da79860b777f6547e998d66552\": rpc error: code = NotFound desc = could not find container \"fd7d3f8d14d5fba0b8e4e22c56078ee174fc63da79860b777f6547e998d66552\": container with ID starting with fd7d3f8d14d5fba0b8e4e22c56078ee174fc63da79860b777f6547e998d66552 not found: ID does not exist" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.140257 4893 scope.go:117] "RemoveContainer" containerID="43e2f3de8aac44b573e671a9319da89519104a33c2e68477f08aea9615681bc5" Jan 28 15:32:23 crc kubenswrapper[4893]: E0128 15:32:23.140716 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43e2f3de8aac44b573e671a9319da89519104a33c2e68477f08aea9615681bc5\": container with ID starting with 43e2f3de8aac44b573e671a9319da89519104a33c2e68477f08aea9615681bc5 not found: ID does not exist" containerID="43e2f3de8aac44b573e671a9319da89519104a33c2e68477f08aea9615681bc5" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.140746 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43e2f3de8aac44b573e671a9319da89519104a33c2e68477f08aea9615681bc5"} err="failed to get container status \"43e2f3de8aac44b573e671a9319da89519104a33c2e68477f08aea9615681bc5\": rpc error: code = NotFound desc = could not find container \"43e2f3de8aac44b573e671a9319da89519104a33c2e68477f08aea9615681bc5\": container with ID starting with 43e2f3de8aac44b573e671a9319da89519104a33c2e68477f08aea9615681bc5 not found: ID does not exist" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.140771 4893 scope.go:117] "RemoveContainer" containerID="fd7d3f8d14d5fba0b8e4e22c56078ee174fc63da79860b777f6547e998d66552" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.141011 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd7d3f8d14d5fba0b8e4e22c56078ee174fc63da79860b777f6547e998d66552"} err="failed to get container status 
\"fd7d3f8d14d5fba0b8e4e22c56078ee174fc63da79860b777f6547e998d66552\": rpc error: code = NotFound desc = could not find container \"fd7d3f8d14d5fba0b8e4e22c56078ee174fc63da79860b777f6547e998d66552\": container with ID starting with fd7d3f8d14d5fba0b8e4e22c56078ee174fc63da79860b777f6547e998d66552 not found: ID does not exist" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.141032 4893 scope.go:117] "RemoveContainer" containerID="43e2f3de8aac44b573e671a9319da89519104a33c2e68477f08aea9615681bc5" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.141304 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43e2f3de8aac44b573e671a9319da89519104a33c2e68477f08aea9615681bc5"} err="failed to get container status \"43e2f3de8aac44b573e671a9319da89519104a33c2e68477f08aea9615681bc5\": rpc error: code = NotFound desc = could not find container \"43e2f3de8aac44b573e671a9319da89519104a33c2e68477f08aea9615681bc5\": container with ID starting with 43e2f3de8aac44b573e671a9319da89519104a33c2e68477f08aea9615681bc5 not found: ID does not exist" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.202975 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j9wl\" (UniqueName: \"kubernetes.io/projected/284958c1-ea60-44c0-8868-f881dd64f745-kube-api-access-7j9wl\") pod \"nova-kuttl-metadata-0\" (UID: \"284958c1-ea60-44c0-8868-f881dd64f745\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.203032 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/284958c1-ea60-44c0-8868-f881dd64f745-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"284958c1-ea60-44c0-8868-f881dd64f745\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.203053 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/284958c1-ea60-44c0-8868-f881dd64f745-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"284958c1-ea60-44c0-8868-f881dd64f745\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.203073 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e23591d-6753-4d56-b350-d1a802713c45-logs\") pod \"nova-kuttl-api-0\" (UID: \"9e23591d-6753-4d56-b350-d1a802713c45\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.203134 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e23591d-6753-4d56-b350-d1a802713c45-config-data\") pod \"nova-kuttl-api-0\" (UID: \"9e23591d-6753-4d56-b350-d1a802713c45\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.203153 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmpr2\" (UniqueName: \"kubernetes.io/projected/9e23591d-6753-4d56-b350-d1a802713c45-kube-api-access-gmpr2\") pod \"nova-kuttl-api-0\" (UID: \"9e23591d-6753-4d56-b350-d1a802713c45\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.304990 4893 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7j9wl\" (UniqueName: \"kubernetes.io/projected/284958c1-ea60-44c0-8868-f881dd64f745-kube-api-access-7j9wl\") pod \"nova-kuttl-metadata-0\" (UID: \"284958c1-ea60-44c0-8868-f881dd64f745\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.305054 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/284958c1-ea60-44c0-8868-f881dd64f745-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"284958c1-ea60-44c0-8868-f881dd64f745\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.305090 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/284958c1-ea60-44c0-8868-f881dd64f745-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"284958c1-ea60-44c0-8868-f881dd64f745\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.305126 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e23591d-6753-4d56-b350-d1a802713c45-logs\") pod \"nova-kuttl-api-0\" (UID: \"9e23591d-6753-4d56-b350-d1a802713c45\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.305238 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e23591d-6753-4d56-b350-d1a802713c45-config-data\") pod \"nova-kuttl-api-0\" (UID: \"9e23591d-6753-4d56-b350-d1a802713c45\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.305918 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmpr2\" (UniqueName: \"kubernetes.io/projected/9e23591d-6753-4d56-b350-d1a802713c45-kube-api-access-gmpr2\") pod \"nova-kuttl-api-0\" (UID: \"9e23591d-6753-4d56-b350-d1a802713c45\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.305842 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e23591d-6753-4d56-b350-d1a802713c45-logs\") pod \"nova-kuttl-api-0\" (UID: \"9e23591d-6753-4d56-b350-d1a802713c45\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.305713 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/284958c1-ea60-44c0-8868-f881dd64f745-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"284958c1-ea60-44c0-8868-f881dd64f745\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.310514 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e23591d-6753-4d56-b350-d1a802713c45-config-data\") pod \"nova-kuttl-api-0\" (UID: \"9e23591d-6753-4d56-b350-d1a802713c45\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.317679 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/284958c1-ea60-44c0-8868-f881dd64f745-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"284958c1-ea60-44c0-8868-f881dd64f745\") " 
pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.322547 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j9wl\" (UniqueName: \"kubernetes.io/projected/284958c1-ea60-44c0-8868-f881dd64f745-kube-api-access-7j9wl\") pod \"nova-kuttl-metadata-0\" (UID: \"284958c1-ea60-44c0-8868-f881dd64f745\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.323033 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmpr2\" (UniqueName: \"kubernetes.io/projected/9e23591d-6753-4d56-b350-d1a802713c45-kube-api-access-gmpr2\") pod \"nova-kuttl-api-0\" (UID: \"9e23591d-6753-4d56-b350-d1a802713c45\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.419434 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.430335 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.870458 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:32:23 crc kubenswrapper[4893]: W0128 15:32:23.872084 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e23591d_6753_4d56_b350_d1a802713c45.slice/crio-d4d92f6fc50bdda1be2b6fef0f0a36140f0c511f153650e3fc70a84a114422e7 WatchSource:0}: Error finding container d4d92f6fc50bdda1be2b6fef0f0a36140f0c511f153650e3fc70a84a114422e7: Status 404 returned error can't find the container with id d4d92f6fc50bdda1be2b6fef0f0a36140f0c511f153650e3fc70a84a114422e7 Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.950936 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:32:23 crc kubenswrapper[4893]: W0128 15:32:23.952345 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod284958c1_ea60_44c0_8868_f881dd64f745.slice/crio-b8db4ab2aeb8486a6ef5730f3aa08b006c2d1590b5bfb678903141a477949810 WatchSource:0}: Error finding container b8db4ab2aeb8486a6ef5730f3aa08b006c2d1590b5bfb678903141a477949810: Status 404 returned error can't find the container with id b8db4ab2aeb8486a6ef5730f3aa08b006c2d1590b5bfb678903141a477949810 Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.971785 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"9e23591d-6753-4d56-b350-d1a802713c45","Type":"ContainerStarted","Data":"d4d92f6fc50bdda1be2b6fef0f0a36140f0c511f153650e3fc70a84a114422e7"} Jan 28 15:32:23 crc kubenswrapper[4893]: I0128 15:32:23.975010 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"284958c1-ea60-44c0-8868-f881dd64f745","Type":"ContainerStarted","Data":"b8db4ab2aeb8486a6ef5730f3aa08b006c2d1590b5bfb678903141a477949810"} Jan 28 15:32:24 crc kubenswrapper[4893]: I0128 15:32:24.905646 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="639be1ac-d286-4b83-9dac-e60115db84d8" path="/var/lib/kubelet/pods/639be1ac-d286-4b83-9dac-e60115db84d8/volumes" Jan 28 15:32:24 crc kubenswrapper[4893]: I0128 15:32:24.908279 4893 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6ca7f24-6f14-40d8-9450-a93b06a21aad" path="/var/lib/kubelet/pods/b6ca7f24-6f14-40d8-9450-a93b06a21aad/volumes"
Jan 28 15:32:24 crc kubenswrapper[4893]: I0128 15:32:24.984187 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"9e23591d-6753-4d56-b350-d1a802713c45","Type":"ContainerStarted","Data":"11cbbe9184808e4165fbf602f2c8f8fbd46df3fd353cffbc6f3e9d77f0b9adfb"}
Jan 28 15:32:24 crc kubenswrapper[4893]: I0128 15:32:24.984234 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"9e23591d-6753-4d56-b350-d1a802713c45","Type":"ContainerStarted","Data":"a9c0f3889b2c6c23d376a482e84f7f3b72063811564e6734fcccc67dbc125805"}
Jan 28 15:32:24 crc kubenswrapper[4893]: I0128 15:32:24.987274 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"284958c1-ea60-44c0-8868-f881dd64f745","Type":"ContainerStarted","Data":"51ef8d4ae1d8eaa85ec5c2b99665d02c22f674d6ec204f75181977478742c68c"}
Jan 28 15:32:24 crc kubenswrapper[4893]: I0128 15:32:24.987315 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"284958c1-ea60-44c0-8868-f881dd64f745","Type":"ContainerStarted","Data":"5046c02f842d9c5a353eba7ca7e95946986aaff797c1c0eb3b4f884d08f9341a"}
Jan 28 15:32:25 crc kubenswrapper[4893]: I0128 15:32:25.001912 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=3.001871542 podStartE2EDuration="3.001871542s" podCreationTimestamp="2026-01-28 15:32:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:32:25.001360519 +0000 UTC m=+1862.774975557" watchObservedRunningTime="2026-01-28 15:32:25.001871542 +0000 UTC m=+1862.775486570"
Jan 28 15:32:25 crc kubenswrapper[4893]: I0128 15:32:25.023871 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=3.02385272 podStartE2EDuration="3.02385272s" podCreationTimestamp="2026-01-28 15:32:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:32:25.021398193 +0000 UTC m=+1862.795013241" watchObservedRunningTime="2026-01-28 15:32:25.02385272 +0000 UTC m=+1862.797467748"
Jan 28 15:32:26 crc kubenswrapper[4893]: I0128 15:32:26.596957 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 28 15:32:28 crc kubenswrapper[4893]: I0128 15:32:28.419988 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:32:28 crc kubenswrapper[4893]: I0128 15:32:28.420053 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:32:31 crc kubenswrapper[4893]: I0128 15:32:31.171970 4893 scope.go:117] "RemoveContainer" containerID="a30b617dbd75a75d68cda3c363008fea48b22f9f9aa2d3535c4a11d0bbee2a4d"
Jan 28 15:32:31 crc kubenswrapper[4893]: I0128 15:32:31.205521 4893 scope.go:117] "RemoveContainer" containerID="4b02e3c8525e8f4527efaea187ff34f903a0ea0fedae6e9491ccaf649d44808e"
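
Note: the two pod_startup_latency_tracker entries above put both replacement pods at roughly 3s from creation to observed running (the pull timestamps are zeroed because the images were already present), and the surrounding "RemoveContainer" bursts are the kubelet garbage-collecting the containers of the pods that were just replaced. A throwaway sketch to tabulate podStartSLOduration across a journal; the file name and approach are assumptions based only on the lines above, not kubelet source:

// slostats.go - illustrative sketch: pulls "Observed pod startup duration"
// entries from a kubelet journal on stdin and prints the SLO-relevant
// startup time per pod.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strconv"
)

var slo = regexp.MustCompile(`"Observed pod startup duration" pod="([^"]+)" podStartSLOduration=([0-9.]+)`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		if m := slo.FindStringSubmatch(sc.Text()); m != nil {
			secs, _ := strconv.ParseFloat(m[2], 64)
			fmt.Printf("%8.3fs %s\n", secs, m[1])
		}
	}
}
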
"RemoveContainer" containerID="b40b81269e6b5fcf655bb9fbc81abfab97ed34a4392813fc8bc15ae71afaa3c7" Jan 28 15:32:31 crc kubenswrapper[4893]: I0128 15:32:31.304003 4893 scope.go:117] "RemoveContainer" containerID="92be6fde6a54fcad0a44b8eacc66f7d3fd5537886a3a56d69f335cd2d9615fbf" Jan 28 15:32:31 crc kubenswrapper[4893]: I0128 15:32:31.597153 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:32:31 crc kubenswrapper[4893]: I0128 15:32:31.623682 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:32:32 crc kubenswrapper[4893]: I0128 15:32:32.096176 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:32:32 crc kubenswrapper[4893]: I0128 15:32:32.892720 4893 scope.go:117] "RemoveContainer" containerID="79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51" Jan 28 15:32:32 crc kubenswrapper[4893]: E0128 15:32:32.893296 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:32:33 crc kubenswrapper[4893]: I0128 15:32:33.419836 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:33 crc kubenswrapper[4893]: I0128 15:32:33.420394 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:32:33 crc kubenswrapper[4893]: I0128 15:32:33.431067 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:33 crc kubenswrapper[4893]: I0128 15:32:33.431348 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:32:34 crc kubenswrapper[4893]: I0128 15:32:34.585798 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="284958c1-ea60-44c0-8868-f881dd64f745" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.201:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:32:34 crc kubenswrapper[4893]: I0128 15:32:34.585846 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="9e23591d-6753-4d56-b350-d1a802713c45" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.202:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:32:34 crc kubenswrapper[4893]: I0128 15:32:34.585813 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="9e23591d-6753-4d56-b350-d1a802713c45" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.202:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:32:34 crc kubenswrapper[4893]: I0128 15:32:34.585798 4893 prober.go:107] "Probe failed" probeType="Startup" 
Jan 28 15:32:34 crc kubenswrapper[4893]: I0128 15:32:34.585798 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="284958c1-ea60-44c0-8868-f881dd64f745" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.201:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 15:32:43 crc kubenswrapper[4893]: I0128 15:32:43.422933 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:32:43 crc kubenswrapper[4893]: I0128 15:32:43.423490 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:32:43 crc kubenswrapper[4893]: I0128 15:32:43.427168 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:32:43 crc kubenswrapper[4893]: I0128 15:32:43.427660 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:32:43 crc kubenswrapper[4893]: I0128 15:32:43.435490 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:32:43 crc kubenswrapper[4893]: I0128 15:32:43.435924 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:32:43 crc kubenswrapper[4893]: I0128 15:32:43.439219 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:32:43 crc kubenswrapper[4893]: I0128 15:32:43.440340 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:32:44 crc kubenswrapper[4893]: I0128 15:32:44.179557 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:32:44 crc kubenswrapper[4893]: I0128 15:32:44.184172 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:32:45 crc kubenswrapper[4893]: I0128 15:32:45.891797 4893 scope.go:117] "RemoveContainer" containerID="79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51"
Jan 28 15:32:45 crc kubenswrapper[4893]: E0128 15:32:45.892276 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd"
Jan 28 15:32:57 crc kubenswrapper[4893]: I0128 15:32:57.892075 4893 scope.go:117] "RemoveContainer" containerID="79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51"
Jan 28 15:32:57 crc kubenswrapper[4893]: E0128 15:32:57.892950 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd"
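
Note: machine-config-daemon-l2nht is stuck in CrashLoopBackOff throughout this window; each sync attempt (15:32:20, :32, :45, :57) is skipped because the back-off window is still open, not because a new restart failed. By default the kubelet's crash-loop restart back-off starts at 10s and doubles per restart up to a 5m cap, which matches the "back-off 5m0s" in the error. The ladder below is only an illustration of that default policy, not kubelet source:

// backoff.go - quick illustration of the default crash-loop restart back-off:
// 10s initial delay, doubling per restart, capped at 5m.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, limit := 10*time.Second, 5*time.Minute
	for n := 1; ; n++ {
		if delay >= limit {
			fmt.Printf("restart %d and later: wait %v\n", n, limit)
			break
		}
		fmt.Printf("restart %d: wait %v\n", n, delay)
		delay *= 2
	}
}
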
pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 15:32:58 crc kubenswrapper[4893]: I0128 15:32:58.831422 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="06f31fa8-9788-45e6-b347-f7f697e29075" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://9f6fe59fa2590dc52da3ade690aa6ff7e2362cfd3db84961ffd6e9c2ab385ec7" gracePeriod=30 Jan 28 15:32:58 crc kubenswrapper[4893]: I0128 15:32:58.922781 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 28 15:32:58 crc kubenswrapper[4893]: I0128 15:32:58.924138 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="052e4427-b04a-4a64-80de-5186db93716f" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" containerID="cri-o://043197cad6de7a3f7f019720e2ceab1229aac5d150e1a6cf5e7519001a2b0e32" gracePeriod=30 Jan 28 15:32:58 crc kubenswrapper[4893]: I0128 15:32:58.931883 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:32:58 crc kubenswrapper[4893]: I0128 15:32:58.932225 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="b03317ea-e576-4079-9676-713f7767d401" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://a86d34e89d158a6e7763c5a89689a859c0bb3a9f351803ae46c71d82f45c0126" gracePeriod=30 Jan 28 15:32:58 crc kubenswrapper[4893]: I0128 15:32:58.946903 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:32:58 crc kubenswrapper[4893]: I0128 15:32:58.947509 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="9e23591d-6753-4d56-b350-d1a802713c45" containerName="nova-kuttl-api-log" containerID="cri-o://a9c0f3889b2c6c23d376a482e84f7f3b72063811564e6734fcccc67dbc125805" gracePeriod=30 Jan 28 15:32:58 crc kubenswrapper[4893]: I0128 15:32:58.947668 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="9e23591d-6753-4d56-b350-d1a802713c45" containerName="nova-kuttl-api-api" containerID="cri-o://11cbbe9184808e4165fbf602f2c8f8fbd46df3fd353cffbc6f3e9d77f0b9adfb" gracePeriod=30 Jan 28 15:32:59 crc kubenswrapper[4893]: I0128 15:32:59.326151 4893 generic.go:334] "Generic (PLEG): container finished" podID="9e23591d-6753-4d56-b350-d1a802713c45" containerID="a9c0f3889b2c6c23d376a482e84f7f3b72063811564e6734fcccc67dbc125805" exitCode=143 Jan 28 15:32:59 crc kubenswrapper[4893]: I0128 15:32:59.326227 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"9e23591d-6753-4d56-b350-d1a802713c45","Type":"ContainerDied","Data":"a9c0f3889b2c6c23d376a482e84f7f3b72063811564e6734fcccc67dbc125805"} Jan 28 15:33:00 crc kubenswrapper[4893]: E0128 15:33:00.039726 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9f6fe59fa2590dc52da3ade690aa6ff7e2362cfd3db84961ffd6e9c2ab385ec7" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 15:33:00 crc kubenswrapper[4893]: E0128 15:33:00.041276 4893 log.go:32] "ExecSync cmd from runtime 
service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9f6fe59fa2590dc52da3ade690aa6ff7e2362cfd3db84961ffd6e9c2ab385ec7" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 15:33:00 crc kubenswrapper[4893]: E0128 15:33:00.042978 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9f6fe59fa2590dc52da3ade690aa6ff7e2362cfd3db84961ffd6e9c2ab385ec7" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 15:33:00 crc kubenswrapper[4893]: E0128 15:33:00.043024 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="06f31fa8-9788-45e6-b347-f7f697e29075" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 15:33:01 crc kubenswrapper[4893]: E0128 15:33:01.087725 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="043197cad6de7a3f7f019720e2ceab1229aac5d150e1a6cf5e7519001a2b0e32" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 15:33:01 crc kubenswrapper[4893]: E0128 15:33:01.089952 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="043197cad6de7a3f7f019720e2ceab1229aac5d150e1a6cf5e7519001a2b0e32" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 15:33:01 crc kubenswrapper[4893]: E0128 15:33:01.091524 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="043197cad6de7a3f7f019720e2ceab1229aac5d150e1a6cf5e7519001a2b0e32" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 15:33:01 crc kubenswrapper[4893]: E0128 15:33:01.091589 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="052e4427-b04a-4a64-80de-5186db93716f" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 28 15:33:01 crc kubenswrapper[4893]: I0128 15:33:01.123247 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:33:01 crc kubenswrapper[4893]: I0128 15:33:01.227618 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b03317ea-e576-4079-9676-713f7767d401-config-data\") pod \"b03317ea-e576-4079-9676-713f7767d401\" (UID: \"b03317ea-e576-4079-9676-713f7767d401\") " Jan 28 15:33:01 crc kubenswrapper[4893]: I0128 15:33:01.227956 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jndr9\" (UniqueName: \"kubernetes.io/projected/b03317ea-e576-4079-9676-713f7767d401-kube-api-access-jndr9\") pod \"b03317ea-e576-4079-9676-713f7767d401\" (UID: \"b03317ea-e576-4079-9676-713f7767d401\") " Jan 28 15:33:01 crc kubenswrapper[4893]: I0128 15:33:01.234668 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b03317ea-e576-4079-9676-713f7767d401-kube-api-access-jndr9" (OuterVolumeSpecName: "kube-api-access-jndr9") pod "b03317ea-e576-4079-9676-713f7767d401" (UID: "b03317ea-e576-4079-9676-713f7767d401"). InnerVolumeSpecName "kube-api-access-jndr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:33:01 crc kubenswrapper[4893]: I0128 15:33:01.260148 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b03317ea-e576-4079-9676-713f7767d401-config-data" (OuterVolumeSpecName: "config-data") pod "b03317ea-e576-4079-9676-713f7767d401" (UID: "b03317ea-e576-4079-9676-713f7767d401"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:33:01 crc kubenswrapper[4893]: I0128 15:33:01.332189 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jndr9\" (UniqueName: \"kubernetes.io/projected/b03317ea-e576-4079-9676-713f7767d401-kube-api-access-jndr9\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:01 crc kubenswrapper[4893]: I0128 15:33:01.332628 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b03317ea-e576-4079-9676-713f7767d401-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:01 crc kubenswrapper[4893]: I0128 15:33:01.349392 4893 generic.go:334] "Generic (PLEG): container finished" podID="b03317ea-e576-4079-9676-713f7767d401" containerID="a86d34e89d158a6e7763c5a89689a859c0bb3a9f351803ae46c71d82f45c0126" exitCode=0 Jan 28 15:33:01 crc kubenswrapper[4893]: I0128 15:33:01.349495 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"b03317ea-e576-4079-9676-713f7767d401","Type":"ContainerDied","Data":"a86d34e89d158a6e7763c5a89689a859c0bb3a9f351803ae46c71d82f45c0126"} Jan 28 15:33:01 crc kubenswrapper[4893]: I0128 15:33:01.349561 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"b03317ea-e576-4079-9676-713f7767d401","Type":"ContainerDied","Data":"eb35e8b8889595f36ca1551c79a2da296cec5dfe1f8f7c5e6721d5f4ff691e97"} Jan 28 15:33:01 crc kubenswrapper[4893]: I0128 15:33:01.349588 4893 scope.go:117] "RemoveContainer" containerID="a86d34e89d158a6e7763c5a89689a859c0bb3a9f351803ae46c71d82f45c0126" Jan 28 15:33:01 crc kubenswrapper[4893]: I0128 15:33:01.350023 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:33:01 crc kubenswrapper[4893]: I0128 15:33:01.382028 4893 scope.go:117] "RemoveContainer" containerID="a86d34e89d158a6e7763c5a89689a859c0bb3a9f351803ae46c71d82f45c0126" Jan 28 15:33:01 crc kubenswrapper[4893]: E0128 15:33:01.382900 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a86d34e89d158a6e7763c5a89689a859c0bb3a9f351803ae46c71d82f45c0126\": container with ID starting with a86d34e89d158a6e7763c5a89689a859c0bb3a9f351803ae46c71d82f45c0126 not found: ID does not exist" containerID="a86d34e89d158a6e7763c5a89689a859c0bb3a9f351803ae46c71d82f45c0126" Jan 28 15:33:01 crc kubenswrapper[4893]: I0128 15:33:01.382937 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a86d34e89d158a6e7763c5a89689a859c0bb3a9f351803ae46c71d82f45c0126"} err="failed to get container status \"a86d34e89d158a6e7763c5a89689a859c0bb3a9f351803ae46c71d82f45c0126\": rpc error: code = NotFound desc = could not find container \"a86d34e89d158a6e7763c5a89689a859c0bb3a9f351803ae46c71d82f45c0126\": container with ID starting with a86d34e89d158a6e7763c5a89689a859c0bb3a9f351803ae46c71d82f45c0126 not found: ID does not exist" Jan 28 15:33:01 crc kubenswrapper[4893]: I0128 15:33:01.396279 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:33:01 crc kubenswrapper[4893]: I0128 15:33:01.406140 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:33:01 crc kubenswrapper[4893]: I0128 15:33:01.432415 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:33:01 crc kubenswrapper[4893]: E0128 15:33:01.432930 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b03317ea-e576-4079-9676-713f7767d401" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:33:01 crc kubenswrapper[4893]: I0128 15:33:01.432966 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03317ea-e576-4079-9676-713f7767d401" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:33:01 crc kubenswrapper[4893]: I0128 15:33:01.433193 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="b03317ea-e576-4079-9676-713f7767d401" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:33:01 crc kubenswrapper[4893]: I0128 15:33:01.433931 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:33:01 crc kubenswrapper[4893]: I0128 15:33:01.438671 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 28 15:33:01 crc kubenswrapper[4893]: I0128 15:33:01.446098 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:33:01 crc kubenswrapper[4893]: I0128 15:33:01.536016 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl8tf\" (UniqueName: \"kubernetes.io/projected/935219ee-1e14-4570-a0ab-a6794677e9d4-kube-api-access-nl8tf\") pod \"nova-kuttl-scheduler-0\" (UID: \"935219ee-1e14-4570-a0ab-a6794677e9d4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:33:01 crc kubenswrapper[4893]: I0128 15:33:01.536152 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/935219ee-1e14-4570-a0ab-a6794677e9d4-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"935219ee-1e14-4570-a0ab-a6794677e9d4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:33:01 crc kubenswrapper[4893]: I0128 15:33:01.637993 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl8tf\" (UniqueName: \"kubernetes.io/projected/935219ee-1e14-4570-a0ab-a6794677e9d4-kube-api-access-nl8tf\") pod \"nova-kuttl-scheduler-0\" (UID: \"935219ee-1e14-4570-a0ab-a6794677e9d4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:33:01 crc kubenswrapper[4893]: I0128 15:33:01.638179 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/935219ee-1e14-4570-a0ab-a6794677e9d4-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"935219ee-1e14-4570-a0ab-a6794677e9d4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:33:01 crc kubenswrapper[4893]: I0128 15:33:01.642674 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/935219ee-1e14-4570-a0ab-a6794677e9d4-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"935219ee-1e14-4570-a0ab-a6794677e9d4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:33:01 crc kubenswrapper[4893]: I0128 15:33:01.657605 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl8tf\" (UniqueName: \"kubernetes.io/projected/935219ee-1e14-4570-a0ab-a6794677e9d4-kube-api-access-nl8tf\") pod \"nova-kuttl-scheduler-0\" (UID: \"935219ee-1e14-4570-a0ab-a6794677e9d4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:33:01 crc kubenswrapper[4893]: I0128 15:33:01.759598 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:33:02 crc kubenswrapper[4893]: I0128 15:33:02.143543 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 15:33:02 crc kubenswrapper[4893]: I0128 15:33:02.144146 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="f66cfadb-4c7f-455c-8625-9ae9c7d0d32d" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://54bfefe74d27d707588e6efb940fec547ed36c29f213a0e1eb232b23f60fd877" gracePeriod=30 Jan 28 15:33:02 crc kubenswrapper[4893]: I0128 15:33:02.223819 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:33:02 crc kubenswrapper[4893]: I0128 15:33:02.360892 4893 generic.go:334] "Generic (PLEG): container finished" podID="9e23591d-6753-4d56-b350-d1a802713c45" containerID="11cbbe9184808e4165fbf602f2c8f8fbd46df3fd353cffbc6f3e9d77f0b9adfb" exitCode=0 Jan 28 15:33:02 crc kubenswrapper[4893]: I0128 15:33:02.360950 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"9e23591d-6753-4d56-b350-d1a802713c45","Type":"ContainerDied","Data":"11cbbe9184808e4165fbf602f2c8f8fbd46df3fd353cffbc6f3e9d77f0b9adfb"} Jan 28 15:33:02 crc kubenswrapper[4893]: I0128 15:33:02.362194 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"935219ee-1e14-4570-a0ab-a6794677e9d4","Type":"ContainerStarted","Data":"70b7886ea82db779aaf56f2546439325fa7957c4ba978728bc0a4aa2b7196468"} Jan 28 15:33:02 crc kubenswrapper[4893]: I0128 15:33:02.485510 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:33:02 crc kubenswrapper[4893]: I0128 15:33:02.558736 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e23591d-6753-4d56-b350-d1a802713c45-logs\") pod \"9e23591d-6753-4d56-b350-d1a802713c45\" (UID: \"9e23591d-6753-4d56-b350-d1a802713c45\") " Jan 28 15:33:02 crc kubenswrapper[4893]: I0128 15:33:02.558794 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e23591d-6753-4d56-b350-d1a802713c45-config-data\") pod \"9e23591d-6753-4d56-b350-d1a802713c45\" (UID: \"9e23591d-6753-4d56-b350-d1a802713c45\") " Jan 28 15:33:02 crc kubenswrapper[4893]: I0128 15:33:02.558936 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmpr2\" (UniqueName: \"kubernetes.io/projected/9e23591d-6753-4d56-b350-d1a802713c45-kube-api-access-gmpr2\") pod \"9e23591d-6753-4d56-b350-d1a802713c45\" (UID: \"9e23591d-6753-4d56-b350-d1a802713c45\") " Jan 28 15:33:02 crc kubenswrapper[4893]: I0128 15:33:02.560767 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e23591d-6753-4d56-b350-d1a802713c45-logs" (OuterVolumeSpecName: "logs") pod "9e23591d-6753-4d56-b350-d1a802713c45" (UID: "9e23591d-6753-4d56-b350-d1a802713c45"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:33:02 crc kubenswrapper[4893]: I0128 15:33:02.567722 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e23591d-6753-4d56-b350-d1a802713c45-kube-api-access-gmpr2" (OuterVolumeSpecName: "kube-api-access-gmpr2") pod "9e23591d-6753-4d56-b350-d1a802713c45" (UID: "9e23591d-6753-4d56-b350-d1a802713c45"). InnerVolumeSpecName "kube-api-access-gmpr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:33:02 crc kubenswrapper[4893]: I0128 15:33:02.586614 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e23591d-6753-4d56-b350-d1a802713c45-config-data" (OuterVolumeSpecName: "config-data") pod "9e23591d-6753-4d56-b350-d1a802713c45" (UID: "9e23591d-6753-4d56-b350-d1a802713c45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:33:02 crc kubenswrapper[4893]: I0128 15:33:02.661054 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e23591d-6753-4d56-b350-d1a802713c45-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:02 crc kubenswrapper[4893]: I0128 15:33:02.661096 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e23591d-6753-4d56-b350-d1a802713c45-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:02 crc kubenswrapper[4893]: I0128 15:33:02.661108 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmpr2\" (UniqueName: \"kubernetes.io/projected/9e23591d-6753-4d56-b350-d1a802713c45-kube-api-access-gmpr2\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:02 crc kubenswrapper[4893]: I0128 15:33:02.906026 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b03317ea-e576-4079-9676-713f7767d401" path="/var/lib/kubelet/pods/b03317ea-e576-4079-9676-713f7767d401/volumes" Jan 28 15:33:02 crc kubenswrapper[4893]: E0128 15:33:02.928146 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="54bfefe74d27d707588e6efb940fec547ed36c29f213a0e1eb232b23f60fd877" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 15:33:02 crc kubenswrapper[4893]: E0128 15:33:02.931404 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="54bfefe74d27d707588e6efb940fec547ed36c29f213a0e1eb232b23f60fd877" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 15:33:02 crc kubenswrapper[4893]: E0128 15:33:02.933828 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="54bfefe74d27d707588e6efb940fec547ed36c29f213a0e1eb232b23f60fd877" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 15:33:02 crc kubenswrapper[4893]: E0128 15:33:02.933941 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="f66cfadb-4c7f-455c-8625-9ae9c7d0d32d" 
containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 15:33:03 crc kubenswrapper[4893]: E0128 15:33:03.188825 4893 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod052e4427_b04a_4a64_80de_5186db93716f.slice/crio-043197cad6de7a3f7f019720e2ceab1229aac5d150e1a6cf5e7519001a2b0e32.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod052e4427_b04a_4a64_80de_5186db93716f.slice/crio-conmon-043197cad6de7a3f7f019720e2ceab1229aac5d150e1a6cf5e7519001a2b0e32.scope\": RecentStats: unable to find data in memory cache]" Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.374507 4893 generic.go:334] "Generic (PLEG): container finished" podID="052e4427-b04a-4a64-80de-5186db93716f" containerID="043197cad6de7a3f7f019720e2ceab1229aac5d150e1a6cf5e7519001a2b0e32" exitCode=0 Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.374662 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"052e4427-b04a-4a64-80de-5186db93716f","Type":"ContainerDied","Data":"043197cad6de7a3f7f019720e2ceab1229aac5d150e1a6cf5e7519001a2b0e32"} Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.377798 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"9e23591d-6753-4d56-b350-d1a802713c45","Type":"ContainerDied","Data":"d4d92f6fc50bdda1be2b6fef0f0a36140f0c511f153650e3fc70a84a114422e7"} Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.377858 4893 scope.go:117] "RemoveContainer" containerID="11cbbe9184808e4165fbf602f2c8f8fbd46df3fd353cffbc6f3e9d77f0b9adfb" Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.377990 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.381729 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"935219ee-1e14-4570-a0ab-a6794677e9d4","Type":"ContainerStarted","Data":"246221e069c604b64cc9de35334cd3f73c6873cde69d5ef1324f0fc65ce79d2e"} Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.412931 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.412827413 podStartE2EDuration="2.412827413s" podCreationTimestamp="2026-01-28 15:33:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:33:03.400961781 +0000 UTC m=+1901.174576809" watchObservedRunningTime="2026-01-28 15:33:03.412827413 +0000 UTC m=+1901.186442441" Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.465710 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.484620 4893 scope.go:117] "RemoveContainer" containerID="a9c0f3889b2c6c23d376a482e84f7f3b72063811564e6734fcccc67dbc125805" Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.493268 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.509411 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.529544 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:33:03 crc kubenswrapper[4893]: E0128 15:33:03.530092 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e23591d-6753-4d56-b350-d1a802713c45" containerName="nova-kuttl-api-log" Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.530117 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e23591d-6753-4d56-b350-d1a802713c45" containerName="nova-kuttl-api-log" Jan 28 15:33:03 crc kubenswrapper[4893]: E0128 15:33:03.530131 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="052e4427-b04a-4a64-80de-5186db93716f" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.530141 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="052e4427-b04a-4a64-80de-5186db93716f" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 28 15:33:03 crc kubenswrapper[4893]: E0128 15:33:03.530153 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e23591d-6753-4d56-b350-d1a802713c45" containerName="nova-kuttl-api-api" Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.530164 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e23591d-6753-4d56-b350-d1a802713c45" containerName="nova-kuttl-api-api" Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.530359 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="052e4427-b04a-4a64-80de-5186db93716f" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.530378 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e23591d-6753-4d56-b350-d1a802713c45" containerName="nova-kuttl-api-log" Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.530395 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e23591d-6753-4d56-b350-d1a802713c45" containerName="nova-kuttl-api-api" Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.531633 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.536870 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.546564 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.581074 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwswj\" (UniqueName: \"kubernetes.io/projected/052e4427-b04a-4a64-80de-5186db93716f-kube-api-access-kwswj\") pod \"052e4427-b04a-4a64-80de-5186db93716f\" (UID: \"052e4427-b04a-4a64-80de-5186db93716f\") " Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.581255 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/052e4427-b04a-4a64-80de-5186db93716f-config-data\") pod \"052e4427-b04a-4a64-80de-5186db93716f\" (UID: \"052e4427-b04a-4a64-80de-5186db93716f\") " Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.587014 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/052e4427-b04a-4a64-80de-5186db93716f-kube-api-access-kwswj" (OuterVolumeSpecName: "kube-api-access-kwswj") pod "052e4427-b04a-4a64-80de-5186db93716f" (UID: "052e4427-b04a-4a64-80de-5186db93716f"). InnerVolumeSpecName "kube-api-access-kwswj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.614908 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/052e4427-b04a-4a64-80de-5186db93716f-config-data" (OuterVolumeSpecName: "config-data") pod "052e4427-b04a-4a64-80de-5186db93716f" (UID: "052e4427-b04a-4a64-80de-5186db93716f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.682699 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-858bx\" (UniqueName: \"kubernetes.io/projected/e11e2d51-2fbd-4d10-ae52-02b058487b75-kube-api-access-858bx\") pod \"nova-kuttl-api-0\" (UID: \"e11e2d51-2fbd-4d10-ae52-02b058487b75\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.682790 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e11e2d51-2fbd-4d10-ae52-02b058487b75-logs\") pod \"nova-kuttl-api-0\" (UID: \"e11e2d51-2fbd-4d10-ae52-02b058487b75\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.682837 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e11e2d51-2fbd-4d10-ae52-02b058487b75-config-data\") pod \"nova-kuttl-api-0\" (UID: \"e11e2d51-2fbd-4d10-ae52-02b058487b75\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.682998 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/052e4427-b04a-4a64-80de-5186db93716f-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.683013 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwswj\" (UniqueName: \"kubernetes.io/projected/052e4427-b04a-4a64-80de-5186db93716f-kube-api-access-kwswj\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.784626 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-858bx\" (UniqueName: \"kubernetes.io/projected/e11e2d51-2fbd-4d10-ae52-02b058487b75-kube-api-access-858bx\") pod \"nova-kuttl-api-0\" (UID: \"e11e2d51-2fbd-4d10-ae52-02b058487b75\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.784761 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e11e2d51-2fbd-4d10-ae52-02b058487b75-logs\") pod \"nova-kuttl-api-0\" (UID: \"e11e2d51-2fbd-4d10-ae52-02b058487b75\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.784792 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e11e2d51-2fbd-4d10-ae52-02b058487b75-config-data\") pod \"nova-kuttl-api-0\" (UID: \"e11e2d51-2fbd-4d10-ae52-02b058487b75\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.785381 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e11e2d51-2fbd-4d10-ae52-02b058487b75-logs\") pod \"nova-kuttl-api-0\" (UID: \"e11e2d51-2fbd-4d10-ae52-02b058487b75\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.789909 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e11e2d51-2fbd-4d10-ae52-02b058487b75-config-data\") pod \"nova-kuttl-api-0\" (UID: \"e11e2d51-2fbd-4d10-ae52-02b058487b75\") " 
pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.805080 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-858bx\" (UniqueName: \"kubernetes.io/projected/e11e2d51-2fbd-4d10-ae52-02b058487b75-kube-api-access-858bx\") pod \"nova-kuttl-api-0\" (UID: \"e11e2d51-2fbd-4d10-ae52-02b058487b75\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:33:03 crc kubenswrapper[4893]: I0128 15:33:03.870763 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.361133 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.366628 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:33:04 crc kubenswrapper[4893]: W0128 15:33:04.370502 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode11e2d51_2fbd_4d10_ae52_02b058487b75.slice/crio-46b372dde85c40d1110f266a588fb8339ef8ed527629ccce4053329173f74434 WatchSource:0}: Error finding container 46b372dde85c40d1110f266a588fb8339ef8ed527629ccce4053329173f74434: Status 404 returned error can't find the container with id 46b372dde85c40d1110f266a588fb8339ef8ed527629ccce4053329173f74434 Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.407733 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"e11e2d51-2fbd-4d10-ae52-02b058487b75","Type":"ContainerStarted","Data":"46b372dde85c40d1110f266a588fb8339ef8ed527629ccce4053329173f74434"} Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.415385 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.415395 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"052e4427-b04a-4a64-80de-5186db93716f","Type":"ContainerDied","Data":"6e3f37652ed4531378f4df6f6a20d47acbc1f4712b4a665aa9b11a07f317b831"} Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.415496 4893 scope.go:117] "RemoveContainer" containerID="043197cad6de7a3f7f019720e2ceab1229aac5d150e1a6cf5e7519001a2b0e32" Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.450923 4893 generic.go:334] "Generic (PLEG): container finished" podID="06f31fa8-9788-45e6-b347-f7f697e29075" containerID="9f6fe59fa2590dc52da3ade690aa6ff7e2362cfd3db84961ffd6e9c2ab385ec7" exitCode=0 Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.451025 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.451078 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"06f31fa8-9788-45e6-b347-f7f697e29075","Type":"ContainerDied","Data":"9f6fe59fa2590dc52da3ade690aa6ff7e2362cfd3db84961ffd6e9c2ab385ec7"} Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.451130 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"06f31fa8-9788-45e6-b347-f7f697e29075","Type":"ContainerDied","Data":"7b050c9d41cf3db349975e9fef0d0ac4d527437d735a20abf6d65141d3a3c669"} Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.451937 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.465692 4893 scope.go:117] "RemoveContainer" containerID="9f6fe59fa2590dc52da3ade690aa6ff7e2362cfd3db84961ffd6e9c2ab385ec7" Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.471590 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.477329 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 28 15:33:04 crc kubenswrapper[4893]: E0128 15:33:04.477896 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06f31fa8-9788-45e6-b347-f7f697e29075" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.478027 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="06f31fa8-9788-45e6-b347-f7f697e29075" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.478248 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="06f31fa8-9788-45e6-b347-f7f697e29075" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.478899 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.483822 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.485284 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-compute-fake1-compute-config-data" Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.498770 4893 scope.go:117] "RemoveContainer" containerID="9f6fe59fa2590dc52da3ade690aa6ff7e2362cfd3db84961ffd6e9c2ab385ec7" Jan 28 15:33:04 crc kubenswrapper[4893]: E0128 15:33:04.501691 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f6fe59fa2590dc52da3ade690aa6ff7e2362cfd3db84961ffd6e9c2ab385ec7\": container with ID starting with 9f6fe59fa2590dc52da3ade690aa6ff7e2362cfd3db84961ffd6e9c2ab385ec7 not found: ID does not exist" containerID="9f6fe59fa2590dc52da3ade690aa6ff7e2362cfd3db84961ffd6e9c2ab385ec7" Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.501736 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f6fe59fa2590dc52da3ade690aa6ff7e2362cfd3db84961ffd6e9c2ab385ec7"} err="failed to get container status \"9f6fe59fa2590dc52da3ade690aa6ff7e2362cfd3db84961ffd6e9c2ab385ec7\": rpc error: code = NotFound desc = could not find container \"9f6fe59fa2590dc52da3ade690aa6ff7e2362cfd3db84961ffd6e9c2ab385ec7\": container with ID starting with 9f6fe59fa2590dc52da3ade690aa6ff7e2362cfd3db84961ffd6e9c2ab385ec7 not found: ID does not exist" Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.511839 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06f31fa8-9788-45e6-b347-f7f697e29075-config-data\") pod \"06f31fa8-9788-45e6-b347-f7f697e29075\" (UID: \"06f31fa8-9788-45e6-b347-f7f697e29075\") " Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.511966 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vknx\" (UniqueName: \"kubernetes.io/projected/06f31fa8-9788-45e6-b347-f7f697e29075-kube-api-access-2vknx\") pod \"06f31fa8-9788-45e6-b347-f7f697e29075\" (UID: \"06f31fa8-9788-45e6-b347-f7f697e29075\") " Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.516956 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06f31fa8-9788-45e6-b347-f7f697e29075-kube-api-access-2vknx" (OuterVolumeSpecName: "kube-api-access-2vknx") pod "06f31fa8-9788-45e6-b347-f7f697e29075" (UID: "06f31fa8-9788-45e6-b347-f7f697e29075"). InnerVolumeSpecName "kube-api-access-2vknx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.534787 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06f31fa8-9788-45e6-b347-f7f697e29075-config-data" (OuterVolumeSpecName: "config-data") pod "06f31fa8-9788-45e6-b347-f7f697e29075" (UID: "06f31fa8-9788-45e6-b347-f7f697e29075"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.614000 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ptv2\" (UniqueName: \"kubernetes.io/projected/0fdb187d-14cc-4e15-b604-c1f913305e00-kube-api-access-6ptv2\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"0fdb187d-14cc-4e15-b604-c1f913305e00\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.614157 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fdb187d-14cc-4e15-b604-c1f913305e00-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"0fdb187d-14cc-4e15-b604-c1f913305e00\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.614261 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06f31fa8-9788-45e6-b347-f7f697e29075-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.614274 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vknx\" (UniqueName: \"kubernetes.io/projected/06f31fa8-9788-45e6-b347-f7f697e29075-kube-api-access-2vknx\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.716164 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ptv2\" (UniqueName: \"kubernetes.io/projected/0fdb187d-14cc-4e15-b604-c1f913305e00-kube-api-access-6ptv2\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"0fdb187d-14cc-4e15-b604-c1f913305e00\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.716615 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fdb187d-14cc-4e15-b604-c1f913305e00-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"0fdb187d-14cc-4e15-b604-c1f913305e00\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.721245 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fdb187d-14cc-4e15-b604-c1f913305e00-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"0fdb187d-14cc-4e15-b604-c1f913305e00\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.736580 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ptv2\" (UniqueName: \"kubernetes.io/projected/0fdb187d-14cc-4e15-b604-c1f913305e00-kube-api-access-6ptv2\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"0fdb187d-14cc-4e15-b604-c1f913305e00\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.846751 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.861314 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 15:33:04 crc kubenswrapper[4893]: 
I0128 15:33:04.870162 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.873218 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.875623 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.878755 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.902043 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.906005 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="052e4427-b04a-4a64-80de-5186db93716f" path="/var/lib/kubelet/pods/052e4427-b04a-4a64-80de-5186db93716f/volumes" Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.906634 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06f31fa8-9788-45e6-b347-f7f697e29075" path="/var/lib/kubelet/pods/06f31fa8-9788-45e6-b347-f7f697e29075/volumes" Jan 28 15:33:04 crc kubenswrapper[4893]: I0128 15:33:04.907242 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e23591d-6753-4d56-b350-d1a802713c45" path="/var/lib/kubelet/pods/9e23591d-6753-4d56-b350-d1a802713c45/volumes" Jan 28 15:33:05 crc kubenswrapper[4893]: I0128 15:33:05.022330 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/760bef6d-d498-4149-a14b-24eb8ef48adb-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"760bef6d-d498-4149-a14b-24eb8ef48adb\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:33:05 crc kubenswrapper[4893]: I0128 15:33:05.026251 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz97p\" (UniqueName: \"kubernetes.io/projected/760bef6d-d498-4149-a14b-24eb8ef48adb-kube-api-access-hz97p\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"760bef6d-d498-4149-a14b-24eb8ef48adb\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:33:05 crc kubenswrapper[4893]: I0128 15:33:05.127913 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/760bef6d-d498-4149-a14b-24eb8ef48adb-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"760bef6d-d498-4149-a14b-24eb8ef48adb\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:33:05 crc kubenswrapper[4893]: I0128 15:33:05.127988 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz97p\" (UniqueName: \"kubernetes.io/projected/760bef6d-d498-4149-a14b-24eb8ef48adb-kube-api-access-hz97p\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"760bef6d-d498-4149-a14b-24eb8ef48adb\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:33:05 crc kubenswrapper[4893]: I0128 15:33:05.147781 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz97p\" (UniqueName: \"kubernetes.io/projected/760bef6d-d498-4149-a14b-24eb8ef48adb-kube-api-access-hz97p\") pod 
\"nova-kuttl-cell0-conductor-0\" (UID: \"760bef6d-d498-4149-a14b-24eb8ef48adb\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:33:05 crc kubenswrapper[4893]: I0128 15:33:05.148671 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/760bef6d-d498-4149-a14b-24eb8ef48adb-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"760bef6d-d498-4149-a14b-24eb8ef48adb\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:33:05 crc kubenswrapper[4893]: I0128 15:33:05.217304 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:33:05 crc kubenswrapper[4893]: I0128 15:33:05.375530 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 28 15:33:05 crc kubenswrapper[4893]: W0128 15:33:05.383865 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fdb187d_14cc_4e15_b604_c1f913305e00.slice/crio-3bdd204cdda95954e4937cac9acb50102c710afa535de078c0d3c6539019954a WatchSource:0}: Error finding container 3bdd204cdda95954e4937cac9acb50102c710afa535de078c0d3c6539019954a: Status 404 returned error can't find the container with id 3bdd204cdda95954e4937cac9acb50102c710afa535de078c0d3c6539019954a Jan 28 15:33:05 crc kubenswrapper[4893]: I0128 15:33:05.461735 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"0fdb187d-14cc-4e15-b604-c1f913305e00","Type":"ContainerStarted","Data":"3bdd204cdda95954e4937cac9acb50102c710afa535de078c0d3c6539019954a"} Jan 28 15:33:05 crc kubenswrapper[4893]: I0128 15:33:05.464791 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"e11e2d51-2fbd-4d10-ae52-02b058487b75","Type":"ContainerStarted","Data":"2cc67505210be846c59250687f1ff459618dabb658c99ad0dcffc5edc84c0f51"} Jan 28 15:33:05 crc kubenswrapper[4893]: I0128 15:33:05.464836 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"e11e2d51-2fbd-4d10-ae52-02b058487b75","Type":"ContainerStarted","Data":"ee5cfb041b2c58910fd37e448860e037901cf845792767efcb22f21602239cef"} Jan 28 15:33:05 crc kubenswrapper[4893]: I0128 15:33:05.497659 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.497620583 podStartE2EDuration="2.497620583s" podCreationTimestamp="2026-01-28 15:33:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:33:05.488367632 +0000 UTC m=+1903.261982680" watchObservedRunningTime="2026-01-28 15:33:05.497620583 +0000 UTC m=+1903.271235611" Jan 28 15:33:05 crc kubenswrapper[4893]: I0128 15:33:05.666905 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 15:33:05 crc kubenswrapper[4893]: W0128 15:33:05.671593 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod760bef6d_d498_4149_a14b_24eb8ef48adb.slice/crio-13aa5418960979415100a33c151216e25c27b1d456956f99261bb154e8288df8 WatchSource:0}: Error finding container 13aa5418960979415100a33c151216e25c27b1d456956f99261bb154e8288df8: 
Status 404 returned error can't find the container with id 13aa5418960979415100a33c151216e25c27b1d456956f99261bb154e8288df8 Jan 28 15:33:06 crc kubenswrapper[4893]: I0128 15:33:06.481932 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"0fdb187d-14cc-4e15-b604-c1f913305e00","Type":"ContainerStarted","Data":"9f9fa8127f9b22ebfebbfa3cf21c15dc8649fc1525a0019ca7c7796587866b6f"} Jan 28 15:33:06 crc kubenswrapper[4893]: I0128 15:33:06.482452 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 15:33:06 crc kubenswrapper[4893]: I0128 15:33:06.485820 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"760bef6d-d498-4149-a14b-24eb8ef48adb","Type":"ContainerStarted","Data":"f0c8fb170bf273de07f4e2a7eab34ab7dece7ab5b157c40b6bc37baa91850c67"} Jan 28 15:33:06 crc kubenswrapper[4893]: I0128 15:33:06.485903 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"760bef6d-d498-4149-a14b-24eb8ef48adb","Type":"ContainerStarted","Data":"13aa5418960979415100a33c151216e25c27b1d456956f99261bb154e8288df8"} Jan 28 15:33:06 crc kubenswrapper[4893]: I0128 15:33:06.508726 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podStartSLOduration=2.508707633 podStartE2EDuration="2.508707633s" podCreationTimestamp="2026-01-28 15:33:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:33:06.50052376 +0000 UTC m=+1904.274138808" watchObservedRunningTime="2026-01-28 15:33:06.508707633 +0000 UTC m=+1904.282322661" Jan 28 15:33:06 crc kubenswrapper[4893]: I0128 15:33:06.517281 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 15:33:06 crc kubenswrapper[4893]: I0128 15:33:06.525508 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podStartSLOduration=2.525464008 podStartE2EDuration="2.525464008s" podCreationTimestamp="2026-01-28 15:33:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:33:06.523733471 +0000 UTC m=+1904.297348509" watchObservedRunningTime="2026-01-28 15:33:06.525464008 +0000 UTC m=+1904.299079036" Jan 28 15:33:06 crc kubenswrapper[4893]: I0128 15:33:06.766513 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 15:33:07.145929 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 15:33:07.281841 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f66cfadb-4c7f-455c-8625-9ae9c7d0d32d-config-data\") pod \"f66cfadb-4c7f-455c-8625-9ae9c7d0d32d\" (UID: \"f66cfadb-4c7f-455c-8625-9ae9c7d0d32d\") " Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 15:33:07.282055 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wkh9\" (UniqueName: \"kubernetes.io/projected/f66cfadb-4c7f-455c-8625-9ae9c7d0d32d-kube-api-access-4wkh9\") pod \"f66cfadb-4c7f-455c-8625-9ae9c7d0d32d\" (UID: \"f66cfadb-4c7f-455c-8625-9ae9c7d0d32d\") " Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 15:33:07.287725 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f66cfadb-4c7f-455c-8625-9ae9c7d0d32d-kube-api-access-4wkh9" (OuterVolumeSpecName: "kube-api-access-4wkh9") pod "f66cfadb-4c7f-455c-8625-9ae9c7d0d32d" (UID: "f66cfadb-4c7f-455c-8625-9ae9c7d0d32d"). InnerVolumeSpecName "kube-api-access-4wkh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 15:33:07.306029 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f66cfadb-4c7f-455c-8625-9ae9c7d0d32d-config-data" (OuterVolumeSpecName: "config-data") pod "f66cfadb-4c7f-455c-8625-9ae9c7d0d32d" (UID: "f66cfadb-4c7f-455c-8625-9ae9c7d0d32d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 15:33:07.384871 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f66cfadb-4c7f-455c-8625-9ae9c7d0d32d-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 15:33:07.384943 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wkh9\" (UniqueName: \"kubernetes.io/projected/f66cfadb-4c7f-455c-8625-9ae9c7d0d32d-kube-api-access-4wkh9\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 15:33:07.498009 4893 generic.go:334] "Generic (PLEG): container finished" podID="f66cfadb-4c7f-455c-8625-9ae9c7d0d32d" containerID="54bfefe74d27d707588e6efb940fec547ed36c29f213a0e1eb232b23f60fd877" exitCode=0 Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 15:33:07.498155 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 15:33:07.498166 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"f66cfadb-4c7f-455c-8625-9ae9c7d0d32d","Type":"ContainerDied","Data":"54bfefe74d27d707588e6efb940fec547ed36c29f213a0e1eb232b23f60fd877"} Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 15:33:07.498314 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"f66cfadb-4c7f-455c-8625-9ae9c7d0d32d","Type":"ContainerDied","Data":"6e2ec1ce1f4e365d30e3aeb2df546bf784846e6148d79638f8d6bffceedd8828"} Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 15:33:07.498351 4893 scope.go:117] "RemoveContainer" containerID="54bfefe74d27d707588e6efb940fec547ed36c29f213a0e1eb232b23f60fd877" Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 15:33:07.498816 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 15:33:07.526517 4893 scope.go:117] "RemoveContainer" containerID="54bfefe74d27d707588e6efb940fec547ed36c29f213a0e1eb232b23f60fd877" Jan 28 15:33:07 crc kubenswrapper[4893]: E0128 15:33:07.527877 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54bfefe74d27d707588e6efb940fec547ed36c29f213a0e1eb232b23f60fd877\": container with ID starting with 54bfefe74d27d707588e6efb940fec547ed36c29f213a0e1eb232b23f60fd877 not found: ID does not exist" containerID="54bfefe74d27d707588e6efb940fec547ed36c29f213a0e1eb232b23f60fd877" Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 15:33:07.527950 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54bfefe74d27d707588e6efb940fec547ed36c29f213a0e1eb232b23f60fd877"} err="failed to get container status \"54bfefe74d27d707588e6efb940fec547ed36c29f213a0e1eb232b23f60fd877\": rpc error: code = NotFound desc = could not find container \"54bfefe74d27d707588e6efb940fec547ed36c29f213a0e1eb232b23f60fd877\": container with ID starting with 54bfefe74d27d707588e6efb940fec547ed36c29f213a0e1eb232b23f60fd877 not found: ID does not exist" Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 15:33:07.552195 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 15:33:07.559640 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 15:33:07.571576 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 15:33:07 crc kubenswrapper[4893]: E0128 15:33:07.572040 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f66cfadb-4c7f-455c-8625-9ae9c7d0d32d" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 15:33:07.572062 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f66cfadb-4c7f-455c-8625-9ae9c7d0d32d" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 15:33:07.572234 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f66cfadb-4c7f-455c-8625-9ae9c7d0d32d" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 
15:33:07.572924 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 15:33:07.575586 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 15:33:07.578481 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 15:33:07.693672 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ccc3820-5948-4de1-8ee7-8064fb59a528-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"5ccc3820-5948-4de1-8ee7-8064fb59a528\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 15:33:07.693833 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzjdh\" (UniqueName: \"kubernetes.io/projected/5ccc3820-5948-4de1-8ee7-8064fb59a528-kube-api-access-hzjdh\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"5ccc3820-5948-4de1-8ee7-8064fb59a528\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 15:33:07.795566 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ccc3820-5948-4de1-8ee7-8064fb59a528-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"5ccc3820-5948-4de1-8ee7-8064fb59a528\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 15:33:07.796134 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzjdh\" (UniqueName: \"kubernetes.io/projected/5ccc3820-5948-4de1-8ee7-8064fb59a528-kube-api-access-hzjdh\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"5ccc3820-5948-4de1-8ee7-8064fb59a528\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 15:33:07.810561 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ccc3820-5948-4de1-8ee7-8064fb59a528-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"5ccc3820-5948-4de1-8ee7-8064fb59a528\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 15:33:07.819736 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzjdh\" (UniqueName: \"kubernetes.io/projected/5ccc3820-5948-4de1-8ee7-8064fb59a528-kube-api-access-hzjdh\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"5ccc3820-5948-4de1-8ee7-8064fb59a528\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:33:07 crc kubenswrapper[4893]: I0128 15:33:07.907358 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:33:08 crc kubenswrapper[4893]: I0128 15:33:08.367618 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 15:33:08 crc kubenswrapper[4893]: I0128 15:33:08.510967 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"5ccc3820-5948-4de1-8ee7-8064fb59a528","Type":"ContainerStarted","Data":"dce0383b6a080af19fac81bf3cdc775f34ba0813065452e25c8c15ffb1005385"} Jan 28 15:33:08 crc kubenswrapper[4893]: I0128 15:33:08.901625 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f66cfadb-4c7f-455c-8625-9ae9c7d0d32d" path="/var/lib/kubelet/pods/f66cfadb-4c7f-455c-8625-9ae9c7d0d32d/volumes" Jan 28 15:33:09 crc kubenswrapper[4893]: I0128 15:33:09.523712 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:33:09 crc kubenswrapper[4893]: I0128 15:33:09.524428 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"5ccc3820-5948-4de1-8ee7-8064fb59a528","Type":"ContainerStarted","Data":"a7c754abc827dd80cf41f2b9256080ccec6712108e9564d248e5c44f011a54d1"} Jan 28 15:33:09 crc kubenswrapper[4893]: I0128 15:33:09.545134 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podStartSLOduration=2.545093545 podStartE2EDuration="2.545093545s" podCreationTimestamp="2026-01-28 15:33:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:33:09.542118515 +0000 UTC m=+1907.315733563" watchObservedRunningTime="2026-01-28 15:33:09.545093545 +0000 UTC m=+1907.318708573" Jan 28 15:33:10 crc kubenswrapper[4893]: I0128 15:33:10.247441 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:33:10 crc kubenswrapper[4893]: I0128 15:33:10.892813 4893 scope.go:117] "RemoveContainer" containerID="79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51" Jan 28 15:33:10 crc kubenswrapper[4893]: E0128 15:33:10.893082 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:33:11 crc kubenswrapper[4893]: I0128 15:33:11.760640 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:33:11 crc kubenswrapper[4893]: I0128 15:33:11.784169 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:33:12 crc kubenswrapper[4893]: I0128 15:33:12.551740 4893 generic.go:334] "Generic (PLEG): container finished" podID="0fdb187d-14cc-4e15-b604-c1f913305e00" containerID="9f9fa8127f9b22ebfebbfa3cf21c15dc8649fc1525a0019ca7c7796587866b6f" exitCode=0 Jan 28 15:33:12 crc kubenswrapper[4893]: I0128 15:33:12.551829 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"0fdb187d-14cc-4e15-b604-c1f913305e00","Type":"ContainerDied","Data":"9f9fa8127f9b22ebfebbfa3cf21c15dc8649fc1525a0019ca7c7796587866b6f"} Jan 28 15:33:12 crc kubenswrapper[4893]: I0128 15:33:12.552744 4893 scope.go:117] "RemoveContainer" containerID="9f9fa8127f9b22ebfebbfa3cf21c15dc8649fc1525a0019ca7c7796587866b6f" Jan 28 15:33:12 crc kubenswrapper[4893]: I0128 15:33:12.585817 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:33:13 crc kubenswrapper[4893]: I0128 15:33:13.562830 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"0fdb187d-14cc-4e15-b604-c1f913305e00","Type":"ContainerStarted","Data":"1c0fba30eaf353185dc124d17a6c7a39650234d5dd472d3efdb30b10bb6e1a85"} Jan 28 15:33:13 crc kubenswrapper[4893]: I0128 15:33:13.563951 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 15:33:13 crc kubenswrapper[4893]: I0128 15:33:13.592188 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 15:33:13 crc kubenswrapper[4893]: I0128 15:33:13.872285 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:33:13 crc kubenswrapper[4893]: I0128 15:33:13.872359 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:33:14 crc kubenswrapper[4893]: I0128 15:33:14.953808 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="e11e2d51-2fbd-4d10-ae52-02b058487b75" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:33:14 crc kubenswrapper[4893]: I0128 15:33:14.953939 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="e11e2d51-2fbd-4d10-ae52-02b058487b75" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:33:17 crc kubenswrapper[4893]: I0128 15:33:17.674012 4893 generic.go:334] "Generic (PLEG): container finished" podID="0fdb187d-14cc-4e15-b604-c1f913305e00" containerID="1c0fba30eaf353185dc124d17a6c7a39650234d5dd472d3efdb30b10bb6e1a85" exitCode=0 Jan 28 15:33:17 crc kubenswrapper[4893]: I0128 15:33:17.674110 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"0fdb187d-14cc-4e15-b604-c1f913305e00","Type":"ContainerDied","Data":"1c0fba30eaf353185dc124d17a6c7a39650234d5dd472d3efdb30b10bb6e1a85"} Jan 28 15:33:17 crc kubenswrapper[4893]: I0128 15:33:17.674683 4893 scope.go:117] "RemoveContainer" containerID="9f9fa8127f9b22ebfebbfa3cf21c15dc8649fc1525a0019ca7c7796587866b6f" Jan 28 15:33:17 crc kubenswrapper[4893]: I0128 15:33:17.675579 4893 scope.go:117] "RemoveContainer" containerID="1c0fba30eaf353185dc124d17a6c7a39650234d5dd472d3efdb30b10bb6e1a85" Jan 28 15:33:17 crc kubenswrapper[4893]: E0128 15:33:17.675922 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"nova-kuttl-cell1-compute-fake1-compute-compute\" with CrashLoopBackOff: \"back-off 10s restarting failed container=nova-kuttl-cell1-compute-fake1-compute-compute pod=nova-kuttl-cell1-compute-fake1-compute-0_nova-kuttl-default(0fdb187d-14cc-4e15-b604-c1f913305e00)\"" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="0fdb187d-14cc-4e15-b604-c1f913305e00" Jan 28 15:33:17 crc kubenswrapper[4893]: I0128 15:33:17.936112 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:33:19 crc kubenswrapper[4893]: I0128 15:33:19.903076 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 15:33:19 crc kubenswrapper[4893]: I0128 15:33:19.903880 4893 scope.go:117] "RemoveContainer" containerID="1c0fba30eaf353185dc124d17a6c7a39650234d5dd472d3efdb30b10bb6e1a85" Jan 28 15:33:19 crc kubenswrapper[4893]: E0128 15:33:19.904219 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-kuttl-cell1-compute-fake1-compute-compute\" with CrashLoopBackOff: \"back-off 10s restarting failed container=nova-kuttl-cell1-compute-fake1-compute-compute pod=nova-kuttl-cell1-compute-fake1-compute-0_nova-kuttl-default(0fdb187d-14cc-4e15-b604-c1f913305e00)\"" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="0fdb187d-14cc-4e15-b604-c1f913305e00" Jan 28 15:33:21 crc kubenswrapper[4893]: I0128 15:33:21.893524 4893 scope.go:117] "RemoveContainer" containerID="79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51" Jan 28 15:33:21 crc kubenswrapper[4893]: E0128 15:33:21.895349 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:33:23 crc kubenswrapper[4893]: I0128 15:33:23.875970 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:33:23 crc kubenswrapper[4893]: I0128 15:33:23.876564 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:33:23 crc kubenswrapper[4893]: I0128 15:33:23.876935 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:33:23 crc kubenswrapper[4893]: I0128 15:33:23.882879 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:33:24 crc kubenswrapper[4893]: I0128 15:33:24.742020 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:33:24 crc kubenswrapper[4893]: I0128 15:33:24.745410 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:33:24 crc kubenswrapper[4893]: I0128 15:33:24.902621 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 15:33:24 crc kubenswrapper[4893]: I0128 15:33:24.904017 4893 scope.go:117] "RemoveContainer" 
containerID="1c0fba30eaf353185dc124d17a6c7a39650234d5dd472d3efdb30b10bb6e1a85" Jan 28 15:33:24 crc kubenswrapper[4893]: E0128 15:33:24.904440 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-kuttl-cell1-compute-fake1-compute-compute\" with CrashLoopBackOff: \"back-off 10s restarting failed container=nova-kuttl-cell1-compute-fake1-compute-compute pod=nova-kuttl-cell1-compute-fake1-compute-0_nova-kuttl-default(0fdb187d-14cc-4e15-b604-c1f913305e00)\"" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="0fdb187d-14cc-4e15-b604-c1f913305e00" Jan 28 15:33:32 crc kubenswrapper[4893]: I0128 15:33:32.897373 4893 scope.go:117] "RemoveContainer" containerID="79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51" Jan 28 15:33:32 crc kubenswrapper[4893]: E0128 15:33:32.898170 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:33:39 crc kubenswrapper[4893]: I0128 15:33:39.892400 4893 scope.go:117] "RemoveContainer" containerID="1c0fba30eaf353185dc124d17a6c7a39650234d5dd472d3efdb30b10bb6e1a85" Jan 28 15:33:40 crc kubenswrapper[4893]: I0128 15:33:40.872793 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"0fdb187d-14cc-4e15-b604-c1f913305e00","Type":"ContainerStarted","Data":"19a3c0768eaf9982ac92395a5658756fe6092b2a770ba144c8d5a60ecbfa1dc8"} Jan 28 15:33:40 crc kubenswrapper[4893]: I0128 15:33:40.873817 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 15:33:40 crc kubenswrapper[4893]: I0128 15:33:40.910165 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.080225 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-shbq5"] Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.090229 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-cbmbs"] Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.100195 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-shbq5"] Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.113940 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-cbmbs"] Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.125642 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-w6p52"] Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.136200 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-w6p52"] Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.148742 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.269306 4893 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novaapi3b1a-account-delete-f2vj9"] Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.270455 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapi3b1a-account-delete-f2vj9" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.282283 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapi3b1a-account-delete-f2vj9"] Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.365678 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.365952 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podUID="8c5a4d01-0aec-4669-9f2f-20654ea7b9ce" containerName="nova-kuttl-cell1-novncproxy-novncproxy" containerID="cri-o://a97d631d7a902e3eaf7932b633a9bd9f79294ad9076f749475d2fa6316079a63" gracePeriod=30 Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.382229 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.382467 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="935219ee-1e14-4570-a0ab-a6794677e9d4" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://246221e069c604b64cc9de35334cd3f73c6873cde69d5ef1324f0fc65ce79d2e" gracePeriod=30 Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.399661 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell0558f-account-delete-hbnml"] Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.401034 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell0558f-account-delete-hbnml" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.415075 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell0558f-account-delete-hbnml"] Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.416870 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8b29563-a0ed-4c26-8843-8bc9ef408fe6-operator-scripts\") pod \"novaapi3b1a-account-delete-f2vj9\" (UID: \"e8b29563-a0ed-4c26-8843-8bc9ef408fe6\") " pod="nova-kuttl-default/novaapi3b1a-account-delete-f2vj9" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.416910 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjlvf\" (UniqueName: \"kubernetes.io/projected/e8b29563-a0ed-4c26-8843-8bc9ef408fe6-kube-api-access-xjlvf\") pod \"novaapi3b1a-account-delete-f2vj9\" (UID: \"e8b29563-a0ed-4c26-8843-8bc9ef408fe6\") " pod="nova-kuttl-default/novaapi3b1a-account-delete-f2vj9" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.526700 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b50d2cb9-d0f6-4af9-9865-cf7e57b46436-operator-scripts\") pod \"novacell0558f-account-delete-hbnml\" (UID: \"b50d2cb9-d0f6-4af9-9865-cf7e57b46436\") " pod="nova-kuttl-default/novacell0558f-account-delete-hbnml" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.526766 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8b29563-a0ed-4c26-8843-8bc9ef408fe6-operator-scripts\") pod \"novaapi3b1a-account-delete-f2vj9\" (UID: \"e8b29563-a0ed-4c26-8843-8bc9ef408fe6\") " pod="nova-kuttl-default/novaapi3b1a-account-delete-f2vj9" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.526787 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjlvf\" (UniqueName: \"kubernetes.io/projected/e8b29563-a0ed-4c26-8843-8bc9ef408fe6-kube-api-access-xjlvf\") pod \"novaapi3b1a-account-delete-f2vj9\" (UID: \"e8b29563-a0ed-4c26-8843-8bc9ef408fe6\") " pod="nova-kuttl-default/novaapi3b1a-account-delete-f2vj9" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.526861 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftdg9\" (UniqueName: \"kubernetes.io/projected/b50d2cb9-d0f6-4af9-9865-cf7e57b46436-kube-api-access-ftdg9\") pod \"novacell0558f-account-delete-hbnml\" (UID: \"b50d2cb9-d0f6-4af9-9865-cf7e57b46436\") " pod="nova-kuttl-default/novacell0558f-account-delete-hbnml" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.527610 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8b29563-a0ed-4c26-8843-8bc9ef408fe6-operator-scripts\") pod \"novaapi3b1a-account-delete-f2vj9\" (UID: \"e8b29563-a0ed-4c26-8843-8bc9ef408fe6\") " pod="nova-kuttl-default/novaapi3b1a-account-delete-f2vj9" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.528028 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell182e7-account-delete-97wdc"] Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.529176 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell182e7-account-delete-97wdc" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.542289 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.542793 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="284958c1-ea60-44c0-8868-f881dd64f745" containerName="nova-kuttl-metadata-log" containerID="cri-o://5046c02f842d9c5a353eba7ca7e95946986aaff797c1c0eb3b4f884d08f9341a" gracePeriod=30 Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.543033 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="284958c1-ea60-44c0-8868-f881dd64f745" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://51ef8d4ae1d8eaa85ec5c2b99665d02c22f674d6ec204f75181977478742c68c" gracePeriod=30 Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.555536 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.555808 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="5ccc3820-5948-4de1-8ee7-8064fb59a528" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://a7c754abc827dd80cf41f2b9256080ccec6712108e9564d248e5c44f011a54d1" gracePeriod=30 Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.570801 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell182e7-account-delete-97wdc"] Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.571851 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjlvf\" (UniqueName: \"kubernetes.io/projected/e8b29563-a0ed-4c26-8843-8bc9ef408fe6-kube-api-access-xjlvf\") pod \"novaapi3b1a-account-delete-f2vj9\" (UID: \"e8b29563-a0ed-4c26-8843-8bc9ef408fe6\") " pod="nova-kuttl-default/novaapi3b1a-account-delete-f2vj9" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.598884 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novaapi3b1a-account-delete-f2vj9" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.602179 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p4zn5"] Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.615026 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-p4zn5"] Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.630394 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxjcl\" (UniqueName: \"kubernetes.io/projected/f57634a9-80e0-4aa7-8ad5-444e48265e5f-kube-api-access-bxjcl\") pod \"novacell182e7-account-delete-97wdc\" (UID: \"f57634a9-80e0-4aa7-8ad5-444e48265e5f\") " pod="nova-kuttl-default/novacell182e7-account-delete-97wdc" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.630491 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftdg9\" (UniqueName: \"kubernetes.io/projected/b50d2cb9-d0f6-4af9-9865-cf7e57b46436-kube-api-access-ftdg9\") pod \"novacell0558f-account-delete-hbnml\" (UID: \"b50d2cb9-d0f6-4af9-9865-cf7e57b46436\") " pod="nova-kuttl-default/novacell0558f-account-delete-hbnml" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.630582 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b50d2cb9-d0f6-4af9-9865-cf7e57b46436-operator-scripts\") pod \"novacell0558f-account-delete-hbnml\" (UID: \"b50d2cb9-d0f6-4af9-9865-cf7e57b46436\") " pod="nova-kuttl-default/novacell0558f-account-delete-hbnml" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.630606 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f57634a9-80e0-4aa7-8ad5-444e48265e5f-operator-scripts\") pod \"novacell182e7-account-delete-97wdc\" (UID: \"f57634a9-80e0-4aa7-8ad5-444e48265e5f\") " pod="nova-kuttl-default/novacell182e7-account-delete-97wdc" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.632257 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b50d2cb9-d0f6-4af9-9865-cf7e57b46436-operator-scripts\") pod \"novacell0558f-account-delete-hbnml\" (UID: \"b50d2cb9-d0f6-4af9-9865-cf7e57b46436\") " pod="nova-kuttl-default/novacell0558f-account-delete-hbnml" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.649809 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.650046 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="760bef6d-d498-4149-a14b-24eb8ef48adb" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://f0c8fb170bf273de07f4e2a7eab34ab7dece7ab5b157c40b6bc37baa91850c67" gracePeriod=30 Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.659561 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-n8hj8"] Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.666165 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftdg9\" (UniqueName: 
\"kubernetes.io/projected/b50d2cb9-d0f6-4af9-9865-cf7e57b46436-kube-api-access-ftdg9\") pod \"novacell0558f-account-delete-hbnml\" (UID: \"b50d2cb9-d0f6-4af9-9865-cf7e57b46436\") " pod="nova-kuttl-default/novacell0558f-account-delete-hbnml" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.671324 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-n8hj8"] Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.673995 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.674281 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="e11e2d51-2fbd-4d10-ae52-02b058487b75" containerName="nova-kuttl-api-log" containerID="cri-o://ee5cfb041b2c58910fd37e448860e037901cf845792767efcb22f21602239cef" gracePeriod=30 Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.674624 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="e11e2d51-2fbd-4d10-ae52-02b058487b75" containerName="nova-kuttl-api-api" containerID="cri-o://2cc67505210be846c59250687f1ff459618dabb658c99ad0dcffc5edc84c0f51" gracePeriod=30 Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.729009 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell0558f-account-delete-hbnml" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.738909 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f57634a9-80e0-4aa7-8ad5-444e48265e5f-operator-scripts\") pod \"novacell182e7-account-delete-97wdc\" (UID: \"f57634a9-80e0-4aa7-8ad5-444e48265e5f\") " pod="nova-kuttl-default/novacell182e7-account-delete-97wdc" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.739004 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxjcl\" (UniqueName: \"kubernetes.io/projected/f57634a9-80e0-4aa7-8ad5-444e48265e5f-kube-api-access-bxjcl\") pod \"novacell182e7-account-delete-97wdc\" (UID: \"f57634a9-80e0-4aa7-8ad5-444e48265e5f\") " pod="nova-kuttl-default/novacell182e7-account-delete-97wdc" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.740458 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f57634a9-80e0-4aa7-8ad5-444e48265e5f-operator-scripts\") pod \"novacell182e7-account-delete-97wdc\" (UID: \"f57634a9-80e0-4aa7-8ad5-444e48265e5f\") " pod="nova-kuttl-default/novacell182e7-account-delete-97wdc" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.768939 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxjcl\" (UniqueName: \"kubernetes.io/projected/f57634a9-80e0-4aa7-8ad5-444e48265e5f-kube-api-access-bxjcl\") pod \"novacell182e7-account-delete-97wdc\" (UID: \"f57634a9-80e0-4aa7-8ad5-444e48265e5f\") " pod="nova-kuttl-default/novacell182e7-account-delete-97wdc" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.871325 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell182e7-account-delete-97wdc" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.909766 4893 generic.go:334] "Generic (PLEG): container finished" podID="284958c1-ea60-44c0-8868-f881dd64f745" containerID="5046c02f842d9c5a353eba7ca7e95946986aaff797c1c0eb3b4f884d08f9341a" exitCode=143 Jan 28 15:33:42 crc kubenswrapper[4893]: E0128 15:33:42.910637 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a7c754abc827dd80cf41f2b9256080ccec6712108e9564d248e5c44f011a54d1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.934057 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b6fcfff-1e42-4851-a4c9-7a55f8c02a33" path="/var/lib/kubelet/pods/2b6fcfff-1e42-4851-a4c9-7a55f8c02a33/volumes" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.934807 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="669f54db-d6f6-4319-998c-171b213d69d9" path="/var/lib/kubelet/pods/669f54db-d6f6-4319-998c-171b213d69d9/volumes" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.935294 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eb8455a-7cd7-42d6-b9a2-c99841ba7f03" path="/var/lib/kubelet/pods/9eb8455a-7cd7-42d6-b9a2-c99841ba7f03/volumes" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.937922 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2e8bf8c-3035-4698-bf63-c309167ce05a" path="/var/lib/kubelet/pods/b2e8bf8c-3035-4698-bf63-c309167ce05a/volumes" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.938607 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c08e48e9-6e2b-4473-9f58-2184de7e8fc8" path="/var/lib/kubelet/pods/c08e48e9-6e2b-4473-9f58-2184de7e8fc8/volumes" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.939210 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"284958c1-ea60-44c0-8868-f881dd64f745","Type":"ContainerDied","Data":"5046c02f842d9c5a353eba7ca7e95946986aaff797c1c0eb3b4f884d08f9341a"} Jan 28 15:33:42 crc kubenswrapper[4893]: E0128 15:33:42.941428 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a7c754abc827dd80cf41f2b9256080ccec6712108e9564d248e5c44f011a54d1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.950302 4893 generic.go:334] "Generic (PLEG): container finished" podID="e11e2d51-2fbd-4d10-ae52-02b058487b75" containerID="ee5cfb041b2c58910fd37e448860e037901cf845792767efcb22f21602239cef" exitCode=143 Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.950920 4893 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" secret="" err="secret \"nova-nova-kuttl-dockercfg-9wfv9\" not found" Jan 28 15:33:42 crc kubenswrapper[4893]: I0128 15:33:42.951407 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"e11e2d51-2fbd-4d10-ae52-02b058487b75","Type":"ContainerDied","Data":"ee5cfb041b2c58910fd37e448860e037901cf845792767efcb22f21602239cef"} Jan 28 15:33:42 crc kubenswrapper[4893]: E0128 15:33:42.971652 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a7c754abc827dd80cf41f2b9256080ccec6712108e9564d248e5c44f011a54d1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 15:33:42 crc kubenswrapper[4893]: E0128 15:33:42.971740 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="5ccc3820-5948-4de1-8ee7-8064fb59a528" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 15:33:43 crc kubenswrapper[4893]: E0128 15:33:43.044667 4893 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 28 15:33:43 crc kubenswrapper[4893]: E0128 15:33:43.044941 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0fdb187d-14cc-4e15-b604-c1f913305e00-config-data podName:0fdb187d-14cc-4e15-b604-c1f913305e00 nodeName:}" failed. No retries permitted until 2026-01-28 15:33:43.544921031 +0000 UTC m=+1941.318536059 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/0fdb187d-14cc-4e15-b604-c1f913305e00-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "0fdb187d-14cc-4e15-b604-c1f913305e00") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 28 15:33:43 crc kubenswrapper[4893]: I0128 15:33:43.270945 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell182e7-account-delete-97wdc"] Jan 28 15:33:43 crc kubenswrapper[4893]: W0128 15:33:43.281706 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf57634a9_80e0_4aa7_8ad5_444e48265e5f.slice/crio-119d7db0375b920608fb818ccf5e739ef426c77bcd11effcf9d18a9d5b859a8f WatchSource:0}: Error finding container 119d7db0375b920608fb818ccf5e739ef426c77bcd11effcf9d18a9d5b859a8f: Status 404 returned error can't find the container with id 119d7db0375b920608fb818ccf5e739ef426c77bcd11effcf9d18a9d5b859a8f Jan 28 15:33:43 crc kubenswrapper[4893]: I0128 15:33:43.296800 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapi3b1a-account-delete-f2vj9"] Jan 28 15:33:43 crc kubenswrapper[4893]: W0128 15:33:43.298634 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8b29563_a0ed_4c26_8843_8bc9ef408fe6.slice/crio-60760266254d0329dfb35c93cdf389536c7a909c485436186f34e1845920685e WatchSource:0}: Error finding container 60760266254d0329dfb35c93cdf389536c7a909c485436186f34e1845920685e: Status 404 returned error can't find the container with id 60760266254d0329dfb35c93cdf389536c7a909c485436186f34e1845920685e Jan 28 15:33:43 crc kubenswrapper[4893]: I0128 15:33:43.382114 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell0558f-account-delete-hbnml"] Jan 28 15:33:43 crc kubenswrapper[4893]: E0128 15:33:43.554111 4893 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 28 15:33:43 crc kubenswrapper[4893]: E0128 15:33:43.554695 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0fdb187d-14cc-4e15-b604-c1f913305e00-config-data podName:0fdb187d-14cc-4e15-b604-c1f913305e00 nodeName:}" failed. No retries permitted until 2026-01-28 15:33:44.554670815 +0000 UTC m=+1942.328285843 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/0fdb187d-14cc-4e15-b604-c1f913305e00-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "0fdb187d-14cc-4e15-b604-c1f913305e00") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 28 15:33:43 crc kubenswrapper[4893]: I0128 15:33:43.842731 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:33:43 crc kubenswrapper[4893]: I0128 15:33:43.959256 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c5a4d01-0aec-4669-9f2f-20654ea7b9ce-config-data\") pod \"8c5a4d01-0aec-4669-9f2f-20654ea7b9ce\" (UID: \"8c5a4d01-0aec-4669-9f2f-20654ea7b9ce\") " Jan 28 15:33:43 crc kubenswrapper[4893]: I0128 15:33:43.959427 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thqg9\" (UniqueName: \"kubernetes.io/projected/8c5a4d01-0aec-4669-9f2f-20654ea7b9ce-kube-api-access-thqg9\") pod \"8c5a4d01-0aec-4669-9f2f-20654ea7b9ce\" (UID: \"8c5a4d01-0aec-4669-9f2f-20654ea7b9ce\") " Jan 28 15:33:43 crc kubenswrapper[4893]: I0128 15:33:43.961444 4893 generic.go:334] "Generic (PLEG): container finished" podID="f57634a9-80e0-4aa7-8ad5-444e48265e5f" containerID="415718ed6f6e6a41fa5c59c692e62f51cdce66287c25516a340a7a2505f0225d" exitCode=0 Jan 28 15:33:43 crc kubenswrapper[4893]: I0128 15:33:43.961670 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell182e7-account-delete-97wdc" event={"ID":"f57634a9-80e0-4aa7-8ad5-444e48265e5f","Type":"ContainerDied","Data":"415718ed6f6e6a41fa5c59c692e62f51cdce66287c25516a340a7a2505f0225d"} Jan 28 15:33:43 crc kubenswrapper[4893]: I0128 15:33:43.961835 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell182e7-account-delete-97wdc" event={"ID":"f57634a9-80e0-4aa7-8ad5-444e48265e5f","Type":"ContainerStarted","Data":"119d7db0375b920608fb818ccf5e739ef426c77bcd11effcf9d18a9d5b859a8f"} Jan 28 15:33:43 crc kubenswrapper[4893]: I0128 15:33:43.968539 4893 generic.go:334] "Generic (PLEG): container finished" podID="b50d2cb9-d0f6-4af9-9865-cf7e57b46436" containerID="5a287f5a2552088d8db9daae8af3d05ed2055d0302e5a00b0a88cb34d8341fec" exitCode=0 Jan 28 15:33:43 crc kubenswrapper[4893]: I0128 15:33:43.968650 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell0558f-account-delete-hbnml" event={"ID":"b50d2cb9-d0f6-4af9-9865-cf7e57b46436","Type":"ContainerDied","Data":"5a287f5a2552088d8db9daae8af3d05ed2055d0302e5a00b0a88cb34d8341fec"} Jan 28 15:33:43 crc kubenswrapper[4893]: I0128 15:33:43.968689 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell0558f-account-delete-hbnml" event={"ID":"b50d2cb9-d0f6-4af9-9865-cf7e57b46436","Type":"ContainerStarted","Data":"10e1076651589d36420dfe27e68ac8a545a7f5ccd8df813e480568d7753bc67d"} Jan 28 15:33:43 crc kubenswrapper[4893]: I0128 15:33:43.968596 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c5a4d01-0aec-4669-9f2f-20654ea7b9ce-kube-api-access-thqg9" (OuterVolumeSpecName: "kube-api-access-thqg9") pod "8c5a4d01-0aec-4669-9f2f-20654ea7b9ce" (UID: "8c5a4d01-0aec-4669-9f2f-20654ea7b9ce"). InnerVolumeSpecName "kube-api-access-thqg9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:33:43 crc kubenswrapper[4893]: I0128 15:33:43.992879 4893 generic.go:334] "Generic (PLEG): container finished" podID="e8b29563-a0ed-4c26-8843-8bc9ef408fe6" containerID="9d0b87a181bb5d077f8e57d5ea94aeff4b083d323dbd0fcaa51ee125283292b1" exitCode=0 Jan 28 15:33:43 crc kubenswrapper[4893]: I0128 15:33:43.992937 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapi3b1a-account-delete-f2vj9" event={"ID":"e8b29563-a0ed-4c26-8843-8bc9ef408fe6","Type":"ContainerDied","Data":"9d0b87a181bb5d077f8e57d5ea94aeff4b083d323dbd0fcaa51ee125283292b1"} Jan 28 15:33:43 crc kubenswrapper[4893]: I0128 15:33:43.992979 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapi3b1a-account-delete-f2vj9" event={"ID":"e8b29563-a0ed-4c26-8843-8bc9ef408fe6","Type":"ContainerStarted","Data":"60760266254d0329dfb35c93cdf389536c7a909c485436186f34e1845920685e"} Jan 28 15:33:44 crc kubenswrapper[4893]: I0128 15:33:44.003246 4893 generic.go:334] "Generic (PLEG): container finished" podID="8c5a4d01-0aec-4669-9f2f-20654ea7b9ce" containerID="a97d631d7a902e3eaf7932b633a9bd9f79294ad9076f749475d2fa6316079a63" exitCode=0 Jan 28 15:33:44 crc kubenswrapper[4893]: I0128 15:33:44.003502 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="0fdb187d-14cc-4e15-b604-c1f913305e00" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" containerID="cri-o://19a3c0768eaf9982ac92395a5658756fe6092b2a770ba144c8d5a60ecbfa1dc8" gracePeriod=30 Jan 28 15:33:44 crc kubenswrapper[4893]: I0128 15:33:44.003800 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:33:44 crc kubenswrapper[4893]: I0128 15:33:44.004162 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"8c5a4d01-0aec-4669-9f2f-20654ea7b9ce","Type":"ContainerDied","Data":"a97d631d7a902e3eaf7932b633a9bd9f79294ad9076f749475d2fa6316079a63"} Jan 28 15:33:44 crc kubenswrapper[4893]: I0128 15:33:44.004202 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"8c5a4d01-0aec-4669-9f2f-20654ea7b9ce","Type":"ContainerDied","Data":"56968aec4d64b3941afabc505148b6a190041ab5d8d20d144ecc4e8f522e27ab"} Jan 28 15:33:44 crc kubenswrapper[4893]: I0128 15:33:44.004223 4893 scope.go:117] "RemoveContainer" containerID="a97d631d7a902e3eaf7932b633a9bd9f79294ad9076f749475d2fa6316079a63" Jan 28 15:33:44 crc kubenswrapper[4893]: I0128 15:33:44.061025 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thqg9\" (UniqueName: \"kubernetes.io/projected/8c5a4d01-0aec-4669-9f2f-20654ea7b9ce-kube-api-access-thqg9\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:44 crc kubenswrapper[4893]: I0128 15:33:44.074877 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c5a4d01-0aec-4669-9f2f-20654ea7b9ce-config-data" (OuterVolumeSpecName: "config-data") pod "8c5a4d01-0aec-4669-9f2f-20654ea7b9ce" (UID: "8c5a4d01-0aec-4669-9f2f-20654ea7b9ce"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:33:44 crc kubenswrapper[4893]: E0128 15:33:44.086890 4893 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb50d2cb9_d0f6_4af9_9865_cf7e57b46436.slice/crio-conmon-5a287f5a2552088d8db9daae8af3d05ed2055d0302e5a00b0a88cb34d8341fec.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb50d2cb9_d0f6_4af9_9865_cf7e57b46436.slice/crio-5a287f5a2552088d8db9daae8af3d05ed2055d0302e5a00b0a88cb34d8341fec.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8b29563_a0ed_4c26_8843_8bc9ef408fe6.slice/crio-9d0b87a181bb5d077f8e57d5ea94aeff4b083d323dbd0fcaa51ee125283292b1.scope\": RecentStats: unable to find data in memory cache]" Jan 28 15:33:44 crc kubenswrapper[4893]: I0128 15:33:44.163823 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c5a4d01-0aec-4669-9f2f-20654ea7b9ce-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:44 crc kubenswrapper[4893]: I0128 15:33:44.175001 4893 scope.go:117] "RemoveContainer" containerID="a97d631d7a902e3eaf7932b633a9bd9f79294ad9076f749475d2fa6316079a63" Jan 28 15:33:44 crc kubenswrapper[4893]: E0128 15:33:44.179146 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a97d631d7a902e3eaf7932b633a9bd9f79294ad9076f749475d2fa6316079a63\": container with ID starting with a97d631d7a902e3eaf7932b633a9bd9f79294ad9076f749475d2fa6316079a63 not found: ID does not exist" containerID="a97d631d7a902e3eaf7932b633a9bd9f79294ad9076f749475d2fa6316079a63" Jan 28 15:33:44 crc kubenswrapper[4893]: I0128 15:33:44.179219 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a97d631d7a902e3eaf7932b633a9bd9f79294ad9076f749475d2fa6316079a63"} err="failed to get container status \"a97d631d7a902e3eaf7932b633a9bd9f79294ad9076f749475d2fa6316079a63\": rpc error: code = NotFound desc = could not find container \"a97d631d7a902e3eaf7932b633a9bd9f79294ad9076f749475d2fa6316079a63\": container with ID starting with a97d631d7a902e3eaf7932b633a9bd9f79294ad9076f749475d2fa6316079a63 not found: ID does not exist" Jan 28 15:33:44 crc kubenswrapper[4893]: I0128 15:33:44.348375 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 15:33:44 crc kubenswrapper[4893]: I0128 15:33:44.357431 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:33:44 crc kubenswrapper[4893]: I0128 15:33:44.360590 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 15:33:44 crc kubenswrapper[4893]: I0128 15:33:44.467681 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ccc3820-5948-4de1-8ee7-8064fb59a528-config-data\") pod \"5ccc3820-5948-4de1-8ee7-8064fb59a528\" (UID: \"5ccc3820-5948-4de1-8ee7-8064fb59a528\") " Jan 28 15:33:44 crc kubenswrapper[4893]: I0128 15:33:44.467750 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzjdh\" (UniqueName: \"kubernetes.io/projected/5ccc3820-5948-4de1-8ee7-8064fb59a528-kube-api-access-hzjdh\") pod \"5ccc3820-5948-4de1-8ee7-8064fb59a528\" (UID: \"5ccc3820-5948-4de1-8ee7-8064fb59a528\") " Jan 28 15:33:44 crc kubenswrapper[4893]: I0128 15:33:44.477518 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ccc3820-5948-4de1-8ee7-8064fb59a528-kube-api-access-hzjdh" (OuterVolumeSpecName: "kube-api-access-hzjdh") pod "5ccc3820-5948-4de1-8ee7-8064fb59a528" (UID: "5ccc3820-5948-4de1-8ee7-8064fb59a528"). InnerVolumeSpecName "kube-api-access-hzjdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:33:44 crc kubenswrapper[4893]: I0128 15:33:44.498451 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ccc3820-5948-4de1-8ee7-8064fb59a528-config-data" (OuterVolumeSpecName: "config-data") pod "5ccc3820-5948-4de1-8ee7-8064fb59a528" (UID: "5ccc3820-5948-4de1-8ee7-8064fb59a528"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:33:44 crc kubenswrapper[4893]: I0128 15:33:44.569717 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ccc3820-5948-4de1-8ee7-8064fb59a528-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:44 crc kubenswrapper[4893]: I0128 15:33:44.570015 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzjdh\" (UniqueName: \"kubernetes.io/projected/5ccc3820-5948-4de1-8ee7-8064fb59a528-kube-api-access-hzjdh\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:44 crc kubenswrapper[4893]: E0128 15:33:44.569816 4893 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 28 15:33:44 crc kubenswrapper[4893]: E0128 15:33:44.570084 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0fdb187d-14cc-4e15-b604-c1f913305e00-config-data podName:0fdb187d-14cc-4e15-b604-c1f913305e00 nodeName:}" failed. No retries permitted until 2026-01-28 15:33:46.570065992 +0000 UTC m=+1944.343681020 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/0fdb187d-14cc-4e15-b604-c1f913305e00-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "0fdb187d-14cc-4e15-b604-c1f913305e00") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 28 15:33:44 crc kubenswrapper[4893]: I0128 15:33:44.896874 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:33:44 crc kubenswrapper[4893]: I0128 15:33:44.906258 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c5a4d01-0aec-4669-9f2f-20654ea7b9ce" path="/var/lib/kubelet/pods/8c5a4d01-0aec-4669-9f2f-20654ea7b9ce/volumes" Jan 28 15:33:44 crc kubenswrapper[4893]: E0128 15:33:44.913634 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19a3c0768eaf9982ac92395a5658756fe6092b2a770ba144c8d5a60ecbfa1dc8" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 15:33:44 crc kubenswrapper[4893]: E0128 15:33:44.925771 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19a3c0768eaf9982ac92395a5658756fe6092b2a770ba144c8d5a60ecbfa1dc8" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 15:33:44 crc kubenswrapper[4893]: E0128 15:33:44.938028 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19a3c0768eaf9982ac92395a5658756fe6092b2a770ba144c8d5a60ecbfa1dc8" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 15:33:44 crc kubenswrapper[4893]: E0128 15:33:44.938108 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="0fdb187d-14cc-4e15-b604-c1f913305e00" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 28 15:33:44 crc kubenswrapper[4893]: I0128 15:33:44.979022 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hz97p\" (UniqueName: \"kubernetes.io/projected/760bef6d-d498-4149-a14b-24eb8ef48adb-kube-api-access-hz97p\") pod \"760bef6d-d498-4149-a14b-24eb8ef48adb\" (UID: \"760bef6d-d498-4149-a14b-24eb8ef48adb\") " Jan 28 15:33:44 crc kubenswrapper[4893]: I0128 15:33:44.979379 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/760bef6d-d498-4149-a14b-24eb8ef48adb-config-data\") pod \"760bef6d-d498-4149-a14b-24eb8ef48adb\" (UID: \"760bef6d-d498-4149-a14b-24eb8ef48adb\") " Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.001087 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/760bef6d-d498-4149-a14b-24eb8ef48adb-kube-api-access-hz97p" (OuterVolumeSpecName: "kube-api-access-hz97p") pod "760bef6d-d498-4149-a14b-24eb8ef48adb" (UID: "760bef6d-d498-4149-a14b-24eb8ef48adb"). InnerVolumeSpecName "kube-api-access-hz97p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.004647 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/760bef6d-d498-4149-a14b-24eb8ef48adb-config-data" (OuterVolumeSpecName: "config-data") pod "760bef6d-d498-4149-a14b-24eb8ef48adb" (UID: "760bef6d-d498-4149-a14b-24eb8ef48adb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.053949 4893 generic.go:334] "Generic (PLEG): container finished" podID="760bef6d-d498-4149-a14b-24eb8ef48adb" containerID="f0c8fb170bf273de07f4e2a7eab34ab7dece7ab5b157c40b6bc37baa91850c67" exitCode=0 Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.054078 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"760bef6d-d498-4149-a14b-24eb8ef48adb","Type":"ContainerDied","Data":"f0c8fb170bf273de07f4e2a7eab34ab7dece7ab5b157c40b6bc37baa91850c67"} Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.054114 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"760bef6d-d498-4149-a14b-24eb8ef48adb","Type":"ContainerDied","Data":"13aa5418960979415100a33c151216e25c27b1d456956f99261bb154e8288df8"} Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.054136 4893 scope.go:117] "RemoveContainer" containerID="f0c8fb170bf273de07f4e2a7eab34ab7dece7ab5b157c40b6bc37baa91850c67" Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.054271 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.079091 4893 generic.go:334] "Generic (PLEG): container finished" podID="5ccc3820-5948-4de1-8ee7-8064fb59a528" containerID="a7c754abc827dd80cf41f2b9256080ccec6712108e9564d248e5c44f011a54d1" exitCode=0 Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.079291 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.080397 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"5ccc3820-5948-4de1-8ee7-8064fb59a528","Type":"ContainerDied","Data":"a7c754abc827dd80cf41f2b9256080ccec6712108e9564d248e5c44f011a54d1"} Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.080438 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"5ccc3820-5948-4de1-8ee7-8064fb59a528","Type":"ContainerDied","Data":"dce0383b6a080af19fac81bf3cdc775f34ba0813065452e25c8c15ffb1005385"} Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.091898 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/760bef6d-d498-4149-a14b-24eb8ef48adb-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.091958 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hz97p\" (UniqueName: \"kubernetes.io/projected/760bef6d-d498-4149-a14b-24eb8ef48adb-kube-api-access-hz97p\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.129343 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.140919 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.150693 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.153654 4893 
scope.go:117] "RemoveContainer" containerID="f0c8fb170bf273de07f4e2a7eab34ab7dece7ab5b157c40b6bc37baa91850c67" Jan 28 15:33:45 crc kubenswrapper[4893]: E0128 15:33:45.154624 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0c8fb170bf273de07f4e2a7eab34ab7dece7ab5b157c40b6bc37baa91850c67\": container with ID starting with f0c8fb170bf273de07f4e2a7eab34ab7dece7ab5b157c40b6bc37baa91850c67 not found: ID does not exist" containerID="f0c8fb170bf273de07f4e2a7eab34ab7dece7ab5b157c40b6bc37baa91850c67" Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.154673 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0c8fb170bf273de07f4e2a7eab34ab7dece7ab5b157c40b6bc37baa91850c67"} err="failed to get container status \"f0c8fb170bf273de07f4e2a7eab34ab7dece7ab5b157c40b6bc37baa91850c67\": rpc error: code = NotFound desc = could not find container \"f0c8fb170bf273de07f4e2a7eab34ab7dece7ab5b157c40b6bc37baa91850c67\": container with ID starting with f0c8fb170bf273de07f4e2a7eab34ab7dece7ab5b157c40b6bc37baa91850c67 not found: ID does not exist" Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.154700 4893 scope.go:117] "RemoveContainer" containerID="a7c754abc827dd80cf41f2b9256080ccec6712108e9564d248e5c44f011a54d1" Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.162007 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.200740 4893 scope.go:117] "RemoveContainer" containerID="a7c754abc827dd80cf41f2b9256080ccec6712108e9564d248e5c44f011a54d1" Jan 28 15:33:45 crc kubenswrapper[4893]: E0128 15:33:45.203190 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7c754abc827dd80cf41f2b9256080ccec6712108e9564d248e5c44f011a54d1\": container with ID starting with a7c754abc827dd80cf41f2b9256080ccec6712108e9564d248e5c44f011a54d1 not found: ID does not exist" containerID="a7c754abc827dd80cf41f2b9256080ccec6712108e9564d248e5c44f011a54d1" Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.203246 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7c754abc827dd80cf41f2b9256080ccec6712108e9564d248e5c44f011a54d1"} err="failed to get container status \"a7c754abc827dd80cf41f2b9256080ccec6712108e9564d248e5c44f011a54d1\": rpc error: code = NotFound desc = could not find container \"a7c754abc827dd80cf41f2b9256080ccec6712108e9564d248e5c44f011a54d1\": container with ID starting with a7c754abc827dd80cf41f2b9256080ccec6712108e9564d248e5c44f011a54d1 not found: ID does not exist" Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.448389 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell182e7-account-delete-97wdc" Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.522212 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell0558f-account-delete-hbnml" Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.554222 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novaapi3b1a-account-delete-f2vj9" Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.609632 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f57634a9-80e0-4aa7-8ad5-444e48265e5f-operator-scripts\") pod \"f57634a9-80e0-4aa7-8ad5-444e48265e5f\" (UID: \"f57634a9-80e0-4aa7-8ad5-444e48265e5f\") " Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.609718 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftdg9\" (UniqueName: \"kubernetes.io/projected/b50d2cb9-d0f6-4af9-9865-cf7e57b46436-kube-api-access-ftdg9\") pod \"b50d2cb9-d0f6-4af9-9865-cf7e57b46436\" (UID: \"b50d2cb9-d0f6-4af9-9865-cf7e57b46436\") " Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.609763 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b50d2cb9-d0f6-4af9-9865-cf7e57b46436-operator-scripts\") pod \"b50d2cb9-d0f6-4af9-9865-cf7e57b46436\" (UID: \"b50d2cb9-d0f6-4af9-9865-cf7e57b46436\") " Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.609792 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxjcl\" (UniqueName: \"kubernetes.io/projected/f57634a9-80e0-4aa7-8ad5-444e48265e5f-kube-api-access-bxjcl\") pod \"f57634a9-80e0-4aa7-8ad5-444e48265e5f\" (UID: \"f57634a9-80e0-4aa7-8ad5-444e48265e5f\") " Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.611634 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f57634a9-80e0-4aa7-8ad5-444e48265e5f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f57634a9-80e0-4aa7-8ad5-444e48265e5f" (UID: "f57634a9-80e0-4aa7-8ad5-444e48265e5f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.612041 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b50d2cb9-d0f6-4af9-9865-cf7e57b46436-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b50d2cb9-d0f6-4af9-9865-cf7e57b46436" (UID: "b50d2cb9-d0f6-4af9-9865-cf7e57b46436"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.621202 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f57634a9-80e0-4aa7-8ad5-444e48265e5f-kube-api-access-bxjcl" (OuterVolumeSpecName: "kube-api-access-bxjcl") pod "f57634a9-80e0-4aa7-8ad5-444e48265e5f" (UID: "f57634a9-80e0-4aa7-8ad5-444e48265e5f"). InnerVolumeSpecName "kube-api-access-bxjcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.623550 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b50d2cb9-d0f6-4af9-9865-cf7e57b46436-kube-api-access-ftdg9" (OuterVolumeSpecName: "kube-api-access-ftdg9") pod "b50d2cb9-d0f6-4af9-9865-cf7e57b46436" (UID: "b50d2cb9-d0f6-4af9-9865-cf7e57b46436"). InnerVolumeSpecName "kube-api-access-ftdg9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.711245 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8b29563-a0ed-4c26-8843-8bc9ef408fe6-operator-scripts\") pod \"e8b29563-a0ed-4c26-8843-8bc9ef408fe6\" (UID: \"e8b29563-a0ed-4c26-8843-8bc9ef408fe6\") " Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.711440 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjlvf\" (UniqueName: \"kubernetes.io/projected/e8b29563-a0ed-4c26-8843-8bc9ef408fe6-kube-api-access-xjlvf\") pod \"e8b29563-a0ed-4c26-8843-8bc9ef408fe6\" (UID: \"e8b29563-a0ed-4c26-8843-8bc9ef408fe6\") " Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.711743 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8b29563-a0ed-4c26-8843-8bc9ef408fe6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e8b29563-a0ed-4c26-8843-8bc9ef408fe6" (UID: "e8b29563-a0ed-4c26-8843-8bc9ef408fe6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.711779 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f57634a9-80e0-4aa7-8ad5-444e48265e5f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.711794 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftdg9\" (UniqueName: \"kubernetes.io/projected/b50d2cb9-d0f6-4af9-9865-cf7e57b46436-kube-api-access-ftdg9\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.711806 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b50d2cb9-d0f6-4af9-9865-cf7e57b46436-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.711815 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxjcl\" (UniqueName: \"kubernetes.io/projected/f57634a9-80e0-4aa7-8ad5-444e48265e5f-kube-api-access-bxjcl\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.718800 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8b29563-a0ed-4c26-8843-8bc9ef408fe6-kube-api-access-xjlvf" (OuterVolumeSpecName: "kube-api-access-xjlvf") pod "e8b29563-a0ed-4c26-8843-8bc9ef408fe6" (UID: "e8b29563-a0ed-4c26-8843-8bc9ef408fe6"). InnerVolumeSpecName "kube-api-access-xjlvf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.813607 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjlvf\" (UniqueName: \"kubernetes.io/projected/e8b29563-a0ed-4c26-8843-8bc9ef408fe6-kube-api-access-xjlvf\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:45 crc kubenswrapper[4893]: I0128 15:33:45.813687 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8b29563-a0ed-4c26-8843-8bc9ef408fe6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.007285 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="284958c1-ea60-44c0-8868-f881dd64f745" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.201:8775/\": read tcp 10.217.0.2:54764->10.217.0.201:8775: read: connection reset by peer" Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.007361 4893 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="284958c1-ea60-44c0-8868-f881dd64f745" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.201:8775/\": read tcp 10.217.0.2:54780->10.217.0.201:8775: read: connection reset by peer" Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.089267 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell182e7-account-delete-97wdc" event={"ID":"f57634a9-80e0-4aa7-8ad5-444e48265e5f","Type":"ContainerDied","Data":"119d7db0375b920608fb818ccf5e739ef426c77bcd11effcf9d18a9d5b859a8f"} Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.089319 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="119d7db0375b920608fb818ccf5e739ef426c77bcd11effcf9d18a9d5b859a8f" Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.089387 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell182e7-account-delete-97wdc" Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.092241 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell0558f-account-delete-hbnml" event={"ID":"b50d2cb9-d0f6-4af9-9865-cf7e57b46436","Type":"ContainerDied","Data":"10e1076651589d36420dfe27e68ac8a545a7f5ccd8df813e480568d7753bc67d"} Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.092290 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10e1076651589d36420dfe27e68ac8a545a7f5ccd8df813e480568d7753bc67d" Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.092365 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell0558f-account-delete-hbnml" Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.102654 4893 generic.go:334] "Generic (PLEG): container finished" podID="284958c1-ea60-44c0-8868-f881dd64f745" containerID="51ef8d4ae1d8eaa85ec5c2b99665d02c22f674d6ec204f75181977478742c68c" exitCode=0 Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.102721 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"284958c1-ea60-44c0-8868-f881dd64f745","Type":"ContainerDied","Data":"51ef8d4ae1d8eaa85ec5c2b99665d02c22f674d6ec204f75181977478742c68c"} Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.105098 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapi3b1a-account-delete-f2vj9" event={"ID":"e8b29563-a0ed-4c26-8843-8bc9ef408fe6","Type":"ContainerDied","Data":"60760266254d0329dfb35c93cdf389536c7a909c485436186f34e1845920685e"} Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.105121 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60760266254d0329dfb35c93cdf389536c7a909c485436186f34e1845920685e" Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.105169 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapi3b1a-account-delete-f2vj9" Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.112453 4893 generic.go:334] "Generic (PLEG): container finished" podID="e11e2d51-2fbd-4d10-ae52-02b058487b75" containerID="2cc67505210be846c59250687f1ff459618dabb658c99ad0dcffc5edc84c0f51" exitCode=0 Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.112528 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"e11e2d51-2fbd-4d10-ae52-02b058487b75","Type":"ContainerDied","Data":"2cc67505210be846c59250687f1ff459618dabb658c99ad0dcffc5edc84c0f51"} Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.304971 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.406562 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.454243 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-858bx\" (UniqueName: \"kubernetes.io/projected/e11e2d51-2fbd-4d10-ae52-02b058487b75-kube-api-access-858bx\") pod \"e11e2d51-2fbd-4d10-ae52-02b058487b75\" (UID: \"e11e2d51-2fbd-4d10-ae52-02b058487b75\") " Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.454320 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e11e2d51-2fbd-4d10-ae52-02b058487b75-config-data\") pod \"e11e2d51-2fbd-4d10-ae52-02b058487b75\" (UID: \"e11e2d51-2fbd-4d10-ae52-02b058487b75\") " Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.454352 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e11e2d51-2fbd-4d10-ae52-02b058487b75-logs\") pod \"e11e2d51-2fbd-4d10-ae52-02b058487b75\" (UID: \"e11e2d51-2fbd-4d10-ae52-02b058487b75\") " Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.455564 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e11e2d51-2fbd-4d10-ae52-02b058487b75-logs" (OuterVolumeSpecName: "logs") pod "e11e2d51-2fbd-4d10-ae52-02b058487b75" (UID: "e11e2d51-2fbd-4d10-ae52-02b058487b75"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.460684 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e11e2d51-2fbd-4d10-ae52-02b058487b75-kube-api-access-858bx" (OuterVolumeSpecName: "kube-api-access-858bx") pod "e11e2d51-2fbd-4d10-ae52-02b058487b75" (UID: "e11e2d51-2fbd-4d10-ae52-02b058487b75"). InnerVolumeSpecName "kube-api-access-858bx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.489350 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e11e2d51-2fbd-4d10-ae52-02b058487b75-config-data" (OuterVolumeSpecName: "config-data") pod "e11e2d51-2fbd-4d10-ae52-02b058487b75" (UID: "e11e2d51-2fbd-4d10-ae52-02b058487b75"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.556044 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/284958c1-ea60-44c0-8868-f881dd64f745-logs\") pod \"284958c1-ea60-44c0-8868-f881dd64f745\" (UID: \"284958c1-ea60-44c0-8868-f881dd64f745\") " Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.556605 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/284958c1-ea60-44c0-8868-f881dd64f745-logs" (OuterVolumeSpecName: "logs") pod "284958c1-ea60-44c0-8868-f881dd64f745" (UID: "284958c1-ea60-44c0-8868-f881dd64f745"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.556619 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j9wl\" (UniqueName: \"kubernetes.io/projected/284958c1-ea60-44c0-8868-f881dd64f745-kube-api-access-7j9wl\") pod \"284958c1-ea60-44c0-8868-f881dd64f745\" (UID: \"284958c1-ea60-44c0-8868-f881dd64f745\") " Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.556844 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/284958c1-ea60-44c0-8868-f881dd64f745-config-data\") pod \"284958c1-ea60-44c0-8868-f881dd64f745\" (UID: \"284958c1-ea60-44c0-8868-f881dd64f745\") " Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.557528 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/284958c1-ea60-44c0-8868-f881dd64f745-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.557634 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-858bx\" (UniqueName: \"kubernetes.io/projected/e11e2d51-2fbd-4d10-ae52-02b058487b75-kube-api-access-858bx\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.557717 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e11e2d51-2fbd-4d10-ae52-02b058487b75-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.557783 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e11e2d51-2fbd-4d10-ae52-02b058487b75-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.560023 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/284958c1-ea60-44c0-8868-f881dd64f745-kube-api-access-7j9wl" (OuterVolumeSpecName: "kube-api-access-7j9wl") pod "284958c1-ea60-44c0-8868-f881dd64f745" (UID: "284958c1-ea60-44c0-8868-f881dd64f745"). InnerVolumeSpecName "kube-api-access-7j9wl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.606662 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/284958c1-ea60-44c0-8868-f881dd64f745-config-data" (OuterVolumeSpecName: "config-data") pod "284958c1-ea60-44c0-8868-f881dd64f745" (UID: "284958c1-ea60-44c0-8868-f881dd64f745"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.659462 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j9wl\" (UniqueName: \"kubernetes.io/projected/284958c1-ea60-44c0-8868-f881dd64f745-kube-api-access-7j9wl\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.659518 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/284958c1-ea60-44c0-8868-f881dd64f745-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:46 crc kubenswrapper[4893]: E0128 15:33:46.659697 4893 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 28 15:33:46 crc kubenswrapper[4893]: E0128 15:33:46.659814 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0fdb187d-14cc-4e15-b604-c1f913305e00-config-data podName:0fdb187d-14cc-4e15-b604-c1f913305e00 nodeName:}" failed. No retries permitted until 2026-01-28 15:33:50.659778026 +0000 UTC m=+1948.433393114 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/0fdb187d-14cc-4e15-b604-c1f913305e00-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "0fdb187d-14cc-4e15-b604-c1f913305e00") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.697880 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.869252 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/935219ee-1e14-4570-a0ab-a6794677e9d4-config-data\") pod \"935219ee-1e14-4570-a0ab-a6794677e9d4\" (UID: \"935219ee-1e14-4570-a0ab-a6794677e9d4\") " Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.869486 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl8tf\" (UniqueName: \"kubernetes.io/projected/935219ee-1e14-4570-a0ab-a6794677e9d4-kube-api-access-nl8tf\") pod \"935219ee-1e14-4570-a0ab-a6794677e9d4\" (UID: \"935219ee-1e14-4570-a0ab-a6794677e9d4\") " Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.879745 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/935219ee-1e14-4570-a0ab-a6794677e9d4-kube-api-access-nl8tf" (OuterVolumeSpecName: "kube-api-access-nl8tf") pod "935219ee-1e14-4570-a0ab-a6794677e9d4" (UID: "935219ee-1e14-4570-a0ab-a6794677e9d4"). InnerVolumeSpecName "kube-api-access-nl8tf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.892191 4893 scope.go:117] "RemoveContainer" containerID="79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51" Jan 28 15:33:46 crc kubenswrapper[4893]: E0128 15:33:46.892640 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.893847 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/935219ee-1e14-4570-a0ab-a6794677e9d4-config-data" (OuterVolumeSpecName: "config-data") pod "935219ee-1e14-4570-a0ab-a6794677e9d4" (UID: "935219ee-1e14-4570-a0ab-a6794677e9d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.903339 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ccc3820-5948-4de1-8ee7-8064fb59a528" path="/var/lib/kubelet/pods/5ccc3820-5948-4de1-8ee7-8064fb59a528/volumes" Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.904069 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="760bef6d-d498-4149-a14b-24eb8ef48adb" path="/var/lib/kubelet/pods/760bef6d-d498-4149-a14b-24eb8ef48adb/volumes" Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.971624 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl8tf\" (UniqueName: \"kubernetes.io/projected/935219ee-1e14-4570-a0ab-a6794677e9d4-kube-api-access-nl8tf\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:46 crc kubenswrapper[4893]: I0128 15:33:46.971666 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/935219ee-1e14-4570-a0ab-a6794677e9d4-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.124147 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"284958c1-ea60-44c0-8868-f881dd64f745","Type":"ContainerDied","Data":"b8db4ab2aeb8486a6ef5730f3aa08b006c2d1590b5bfb678903141a477949810"} Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.124464 4893 scope.go:117] "RemoveContainer" containerID="51ef8d4ae1d8eaa85ec5c2b99665d02c22f674d6ec204f75181977478742c68c" Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.124176 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.127010 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"e11e2d51-2fbd-4d10-ae52-02b058487b75","Type":"ContainerDied","Data":"46b372dde85c40d1110f266a588fb8339ef8ed527629ccce4053329173f74434"} Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.127049 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.129550 4893 generic.go:334] "Generic (PLEG): container finished" podID="935219ee-1e14-4570-a0ab-a6794677e9d4" containerID="246221e069c604b64cc9de35334cd3f73c6873cde69d5ef1324f0fc65ce79d2e" exitCode=0 Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.129605 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.129602 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"935219ee-1e14-4570-a0ab-a6794677e9d4","Type":"ContainerDied","Data":"246221e069c604b64cc9de35334cd3f73c6873cde69d5ef1324f0fc65ce79d2e"} Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.129810 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"935219ee-1e14-4570-a0ab-a6794677e9d4","Type":"ContainerDied","Data":"70b7886ea82db779aaf56f2546439325fa7957c4ba978728bc0a4aa2b7196468"} Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.155406 4893 scope.go:117] "RemoveContainer" containerID="5046c02f842d9c5a353eba7ca7e95946986aaff797c1c0eb3b4f884d08f9341a" Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.158708 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.171438 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.181920 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.196032 4893 scope.go:117] "RemoveContainer" containerID="2cc67505210be846c59250687f1ff459618dabb658c99ad0dcffc5edc84c0f51" Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.199068 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.208628 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.218179 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.218755 4893 scope.go:117] "RemoveContainer" containerID="ee5cfb041b2c58910fd37e448860e037901cf845792767efcb22f21602239cef" Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.241571 4893 scope.go:117] "RemoveContainer" containerID="246221e069c604b64cc9de35334cd3f73c6873cde69d5ef1324f0fc65ce79d2e" Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.265651 4893 scope.go:117] "RemoveContainer" containerID="246221e069c604b64cc9de35334cd3f73c6873cde69d5ef1324f0fc65ce79d2e" Jan 28 15:33:47 crc kubenswrapper[4893]: E0128 15:33:47.268092 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"246221e069c604b64cc9de35334cd3f73c6873cde69d5ef1324f0fc65ce79d2e\": container with ID starting with 246221e069c604b64cc9de35334cd3f73c6873cde69d5ef1324f0fc65ce79d2e not found: ID does not exist" containerID="246221e069c604b64cc9de35334cd3f73c6873cde69d5ef1324f0fc65ce79d2e" Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 
15:33:47.268153 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"246221e069c604b64cc9de35334cd3f73c6873cde69d5ef1324f0fc65ce79d2e"} err="failed to get container status \"246221e069c604b64cc9de35334cd3f73c6873cde69d5ef1324f0fc65ce79d2e\": rpc error: code = NotFound desc = could not find container \"246221e069c604b64cc9de35334cd3f73c6873cde69d5ef1324f0fc65ce79d2e\": container with ID starting with 246221e069c604b64cc9de35334cd3f73c6873cde69d5ef1324f0fc65ce79d2e not found: ID does not exist" Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.304070 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-db-create-jk86f"] Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.311493 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-db-create-jk86f"] Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.322798 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novaapi3b1a-account-delete-f2vj9"] Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.329230 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-3b1a-account-create-update-r22sm"] Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.335617 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novaapi3b1a-account-delete-f2vj9"] Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.341995 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-3b1a-account-create-update-r22sm"] Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.445428 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-n4ks8"] Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.451293 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-n4ks8"] Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.461632 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-558f-account-create-update-hgksj"] Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.470769 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell0558f-account-delete-hbnml"] Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.477335 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-558f-account-create-update-hgksj"] Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.483322 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell0558f-account-delete-hbnml"] Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.552905 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-9sqgr"] Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.560889 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-9sqgr"] Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.577277 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell182e7-account-delete-97wdc"] Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.591674 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-82e7-account-create-update-ls97d"] Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.599358 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["nova-kuttl-default/novacell182e7-account-delete-97wdc"] Jan 28 15:33:47 crc kubenswrapper[4893]: I0128 15:33:47.646137 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-82e7-account-create-update-ls97d"] Jan 28 15:33:48 crc kubenswrapper[4893]: I0128 15:33:48.905266 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="284958c1-ea60-44c0-8868-f881dd64f745" path="/var/lib/kubelet/pods/284958c1-ea60-44c0-8868-f881dd64f745/volumes" Jan 28 15:33:48 crc kubenswrapper[4893]: I0128 15:33:48.907270 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a62af92-8f89-4800-9724-c651058a0cf2" path="/var/lib/kubelet/pods/3a62af92-8f89-4800-9724-c651058a0cf2/volumes" Jan 28 15:33:48 crc kubenswrapper[4893]: I0128 15:33:48.908457 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf82a69-6a29-4c98-8e72-1d4f4a73edda" path="/var/lib/kubelet/pods/3bf82a69-6a29-4c98-8e72-1d4f4a73edda/volumes" Jan 28 15:33:48 crc kubenswrapper[4893]: I0128 15:33:48.909564 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="410a2c70-a715-4d47-a056-ff7d2ca6e79f" path="/var/lib/kubelet/pods/410a2c70-a715-4d47-a056-ff7d2ca6e79f/volumes" Jan 28 15:33:48 crc kubenswrapper[4893]: I0128 15:33:48.910171 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45994969-6957-49cd-95cc-3da11b3f8a53" path="/var/lib/kubelet/pods/45994969-6957-49cd-95cc-3da11b3f8a53/volumes" Jan 28 15:33:48 crc kubenswrapper[4893]: I0128 15:33:48.911032 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64828696-910a-4780-90f7-7022cb08c19f" path="/var/lib/kubelet/pods/64828696-910a-4780-90f7-7022cb08c19f/volumes" Jan 28 15:33:48 crc kubenswrapper[4893]: I0128 15:33:48.911821 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="935219ee-1e14-4570-a0ab-a6794677e9d4" path="/var/lib/kubelet/pods/935219ee-1e14-4570-a0ab-a6794677e9d4/volumes" Jan 28 15:33:48 crc kubenswrapper[4893]: I0128 15:33:48.913142 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99a38c89-ae5a-4e48-8816-423ce2312cc0" path="/var/lib/kubelet/pods/99a38c89-ae5a-4e48-8816-423ce2312cc0/volumes" Jan 28 15:33:48 crc kubenswrapper[4893]: I0128 15:33:48.913727 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b50d2cb9-d0f6-4af9-9865-cf7e57b46436" path="/var/lib/kubelet/pods/b50d2cb9-d0f6-4af9-9865-cf7e57b46436/volumes" Jan 28 15:33:48 crc kubenswrapper[4893]: I0128 15:33:48.914344 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e11e2d51-2fbd-4d10-ae52-02b058487b75" path="/var/lib/kubelet/pods/e11e2d51-2fbd-4d10-ae52-02b058487b75/volumes" Jan 28 15:33:48 crc kubenswrapper[4893]: I0128 15:33:48.915598 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8b29563-a0ed-4c26-8843-8bc9ef408fe6" path="/var/lib/kubelet/pods/e8b29563-a0ed-4c26-8843-8bc9ef408fe6/volumes" Jan 28 15:33:48 crc kubenswrapper[4893]: I0128 15:33:48.916180 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f57634a9-80e0-4aa7-8ad5-444e48265e5f" path="/var/lib/kubelet/pods/f57634a9-80e0-4aa7-8ad5-444e48265e5f/volumes" Jan 28 15:33:49 crc kubenswrapper[4893]: E0128 15:33:49.905202 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="19a3c0768eaf9982ac92395a5658756fe6092b2a770ba144c8d5a60ecbfa1dc8" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 15:33:49 crc kubenswrapper[4893]: E0128 15:33:49.907499 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19a3c0768eaf9982ac92395a5658756fe6092b2a770ba144c8d5a60ecbfa1dc8" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 15:33:49 crc kubenswrapper[4893]: E0128 15:33:49.909244 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19a3c0768eaf9982ac92395a5658756fe6092b2a770ba144c8d5a60ecbfa1dc8" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 15:33:49 crc kubenswrapper[4893]: E0128 15:33:49.909301 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="0fdb187d-14cc-4e15-b604-c1f913305e00" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 28 15:33:49 crc kubenswrapper[4893]: I0128 15:33:49.959310 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-db-create-hc6tn"] Jan 28 15:33:49 crc kubenswrapper[4893]: E0128 15:33:49.960284 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="935219ee-1e14-4570-a0ab-a6794677e9d4" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:33:49 crc kubenswrapper[4893]: I0128 15:33:49.960353 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="935219ee-1e14-4570-a0ab-a6794677e9d4" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:33:49 crc kubenswrapper[4893]: E0128 15:33:49.960412 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="284958c1-ea60-44c0-8868-f881dd64f745" containerName="nova-kuttl-metadata-log" Jan 28 15:33:49 crc kubenswrapper[4893]: I0128 15:33:49.960458 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="284958c1-ea60-44c0-8868-f881dd64f745" containerName="nova-kuttl-metadata-log" Jan 28 15:33:49 crc kubenswrapper[4893]: E0128 15:33:49.960544 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e11e2d51-2fbd-4d10-ae52-02b058487b75" containerName="nova-kuttl-api-log" Jan 28 15:33:49 crc kubenswrapper[4893]: I0128 15:33:49.960623 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="e11e2d51-2fbd-4d10-ae52-02b058487b75" containerName="nova-kuttl-api-log" Jan 28 15:33:49 crc kubenswrapper[4893]: E0128 15:33:49.960708 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e11e2d51-2fbd-4d10-ae52-02b058487b75" containerName="nova-kuttl-api-api" Jan 28 15:33:49 crc kubenswrapper[4893]: I0128 15:33:49.960761 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="e11e2d51-2fbd-4d10-ae52-02b058487b75" containerName="nova-kuttl-api-api" Jan 28 15:33:49 crc kubenswrapper[4893]: E0128 15:33:49.960822 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ccc3820-5948-4de1-8ee7-8064fb59a528" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 15:33:49 crc kubenswrapper[4893]: I0128 15:33:49.960883 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ccc3820-5948-4de1-8ee7-8064fb59a528" 
containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 15:33:49 crc kubenswrapper[4893]: E0128 15:33:49.960961 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b50d2cb9-d0f6-4af9-9865-cf7e57b46436" containerName="mariadb-account-delete" Jan 28 15:33:49 crc kubenswrapper[4893]: I0128 15:33:49.961060 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="b50d2cb9-d0f6-4af9-9865-cf7e57b46436" containerName="mariadb-account-delete" Jan 28 15:33:49 crc kubenswrapper[4893]: E0128 15:33:49.961163 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8b29563-a0ed-4c26-8843-8bc9ef408fe6" containerName="mariadb-account-delete" Jan 28 15:33:49 crc kubenswrapper[4893]: I0128 15:33:49.961231 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8b29563-a0ed-4c26-8843-8bc9ef408fe6" containerName="mariadb-account-delete" Jan 28 15:33:49 crc kubenswrapper[4893]: E0128 15:33:49.961292 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="284958c1-ea60-44c0-8868-f881dd64f745" containerName="nova-kuttl-metadata-metadata" Jan 28 15:33:49 crc kubenswrapper[4893]: I0128 15:33:49.961342 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="284958c1-ea60-44c0-8868-f881dd64f745" containerName="nova-kuttl-metadata-metadata" Jan 28 15:33:49 crc kubenswrapper[4893]: E0128 15:33:49.961395 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="760bef6d-d498-4149-a14b-24eb8ef48adb" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 15:33:49 crc kubenswrapper[4893]: I0128 15:33:49.961462 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="760bef6d-d498-4149-a14b-24eb8ef48adb" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 15:33:49 crc kubenswrapper[4893]: E0128 15:33:49.961560 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c5a4d01-0aec-4669-9f2f-20654ea7b9ce" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 28 15:33:49 crc kubenswrapper[4893]: I0128 15:33:49.961618 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c5a4d01-0aec-4669-9f2f-20654ea7b9ce" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 28 15:33:49 crc kubenswrapper[4893]: E0128 15:33:49.961672 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f57634a9-80e0-4aa7-8ad5-444e48265e5f" containerName="mariadb-account-delete" Jan 28 15:33:49 crc kubenswrapper[4893]: I0128 15:33:49.961721 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f57634a9-80e0-4aa7-8ad5-444e48265e5f" containerName="mariadb-account-delete" Jan 28 15:33:49 crc kubenswrapper[4893]: I0128 15:33:49.961959 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="935219ee-1e14-4570-a0ab-a6794677e9d4" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:33:49 crc kubenswrapper[4893]: I0128 15:33:49.962030 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8b29563-a0ed-4c26-8843-8bc9ef408fe6" containerName="mariadb-account-delete" Jan 28 15:33:49 crc kubenswrapper[4893]: I0128 15:33:49.962088 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="e11e2d51-2fbd-4d10-ae52-02b058487b75" containerName="nova-kuttl-api-api" Jan 28 15:33:49 crc kubenswrapper[4893]: I0128 15:33:49.962145 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ccc3820-5948-4de1-8ee7-8064fb59a528" containerName="nova-kuttl-cell1-conductor-conductor" Jan 28 15:33:49 crc kubenswrapper[4893]: I0128 15:33:49.962203 4893 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f57634a9-80e0-4aa7-8ad5-444e48265e5f" containerName="mariadb-account-delete" Jan 28 15:33:49 crc kubenswrapper[4893]: I0128 15:33:49.962255 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="284958c1-ea60-44c0-8868-f881dd64f745" containerName="nova-kuttl-metadata-metadata" Jan 28 15:33:49 crc kubenswrapper[4893]: I0128 15:33:49.962303 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="b50d2cb9-d0f6-4af9-9865-cf7e57b46436" containerName="mariadb-account-delete" Jan 28 15:33:49 crc kubenswrapper[4893]: I0128 15:33:49.962361 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="760bef6d-d498-4149-a14b-24eb8ef48adb" containerName="nova-kuttl-cell0-conductor-conductor" Jan 28 15:33:49 crc kubenswrapper[4893]: I0128 15:33:49.962440 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="284958c1-ea60-44c0-8868-f881dd64f745" containerName="nova-kuttl-metadata-log" Jan 28 15:33:49 crc kubenswrapper[4893]: I0128 15:33:49.962513 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="e11e2d51-2fbd-4d10-ae52-02b058487b75" containerName="nova-kuttl-api-log" Jan 28 15:33:49 crc kubenswrapper[4893]: I0128 15:33:49.962567 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c5a4d01-0aec-4669-9f2f-20654ea7b9ce" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 28 15:33:49 crc kubenswrapper[4893]: I0128 15:33:49.963309 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-hc6tn" Jan 28 15:33:49 crc kubenswrapper[4893]: I0128 15:33:49.972642 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-hc6tn"] Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.033113 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-vxgfw"] Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.034119 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-vxgfw" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.039764 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk55b\" (UniqueName: \"kubernetes.io/projected/4bb0a658-a2dc-4442-a362-e1a6fd576848-kube-api-access-bk55b\") pod \"nova-api-db-create-hc6tn\" (UID: \"4bb0a658-a2dc-4442-a362-e1a6fd576848\") " pod="nova-kuttl-default/nova-api-db-create-hc6tn" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.042748 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bb0a658-a2dc-4442-a362-e1a6fd576848-operator-scripts\") pod \"nova-api-db-create-hc6tn\" (UID: \"4bb0a658-a2dc-4442-a362-e1a6fd576848\") " pod="nova-kuttl-default/nova-api-db-create-hc6tn" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.057422 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-vxgfw"] Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.144774 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bb0a658-a2dc-4442-a362-e1a6fd576848-operator-scripts\") pod \"nova-api-db-create-hc6tn\" (UID: \"4bb0a658-a2dc-4442-a362-e1a6fd576848\") " pod="nova-kuttl-default/nova-api-db-create-hc6tn" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.144835 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bf5b624-d148-4c17-8824-77512ecaadba-operator-scripts\") pod \"nova-cell0-db-create-vxgfw\" (UID: \"5bf5b624-d148-4c17-8824-77512ecaadba\") " pod="nova-kuttl-default/nova-cell0-db-create-vxgfw" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.144899 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qmr2\" (UniqueName: \"kubernetes.io/projected/5bf5b624-d148-4c17-8824-77512ecaadba-kube-api-access-4qmr2\") pod \"nova-cell0-db-create-vxgfw\" (UID: \"5bf5b624-d148-4c17-8824-77512ecaadba\") " pod="nova-kuttl-default/nova-cell0-db-create-vxgfw" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.145259 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk55b\" (UniqueName: \"kubernetes.io/projected/4bb0a658-a2dc-4442-a362-e1a6fd576848-kube-api-access-bk55b\") pod \"nova-api-db-create-hc6tn\" (UID: \"4bb0a658-a2dc-4442-a362-e1a6fd576848\") " pod="nova-kuttl-default/nova-api-db-create-hc6tn" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.145835 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bb0a658-a2dc-4442-a362-e1a6fd576848-operator-scripts\") pod \"nova-api-db-create-hc6tn\" (UID: \"4bb0a658-a2dc-4442-a362-e1a6fd576848\") " pod="nova-kuttl-default/nova-api-db-create-hc6tn" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.167701 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-a703-account-create-update-r88t7"] Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.169162 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-a703-account-create-update-r88t7" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.175782 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-api-db-secret" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.176516 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk55b\" (UniqueName: \"kubernetes.io/projected/4bb0a658-a2dc-4442-a362-e1a6fd576848-kube-api-access-bk55b\") pod \"nova-api-db-create-hc6tn\" (UID: \"4bb0a658-a2dc-4442-a362-e1a6fd576848\") " pod="nova-kuttl-default/nova-api-db-create-hc6tn" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.190290 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-a703-account-create-update-r88t7"] Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.249798 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25b927e3-d3f5-4343-af70-bc2eb39a539c-operator-scripts\") pod \"nova-api-a703-account-create-update-r88t7\" (UID: \"25b927e3-d3f5-4343-af70-bc2eb39a539c\") " pod="nova-kuttl-default/nova-api-a703-account-create-update-r88t7" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.249932 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdwx7\" (UniqueName: \"kubernetes.io/projected/25b927e3-d3f5-4343-af70-bc2eb39a539c-kube-api-access-xdwx7\") pod \"nova-api-a703-account-create-update-r88t7\" (UID: \"25b927e3-d3f5-4343-af70-bc2eb39a539c\") " pod="nova-kuttl-default/nova-api-a703-account-create-update-r88t7" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.250014 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bf5b624-d148-4c17-8824-77512ecaadba-operator-scripts\") pod \"nova-cell0-db-create-vxgfw\" (UID: \"5bf5b624-d148-4c17-8824-77512ecaadba\") " pod="nova-kuttl-default/nova-cell0-db-create-vxgfw" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.250077 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qmr2\" (UniqueName: \"kubernetes.io/projected/5bf5b624-d148-4c17-8824-77512ecaadba-kube-api-access-4qmr2\") pod \"nova-cell0-db-create-vxgfw\" (UID: \"5bf5b624-d148-4c17-8824-77512ecaadba\") " pod="nova-kuttl-default/nova-cell0-db-create-vxgfw" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.254321 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bf5b624-d148-4c17-8824-77512ecaadba-operator-scripts\") pod \"nova-cell0-db-create-vxgfw\" (UID: \"5bf5b624-d148-4c17-8824-77512ecaadba\") " pod="nova-kuttl-default/nova-cell0-db-create-vxgfw" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.254382 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-fxmm7"] Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.255718 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-fxmm7" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.271892 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-fxmm7"] Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.277257 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qmr2\" (UniqueName: \"kubernetes.io/projected/5bf5b624-d148-4c17-8824-77512ecaadba-kube-api-access-4qmr2\") pod \"nova-cell0-db-create-vxgfw\" (UID: \"5bf5b624-d148-4c17-8824-77512ecaadba\") " pod="nova-kuttl-default/nova-cell0-db-create-vxgfw" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.340959 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-hc6tn" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.351579 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25b927e3-d3f5-4343-af70-bc2eb39a539c-operator-scripts\") pod \"nova-api-a703-account-create-update-r88t7\" (UID: \"25b927e3-d3f5-4343-af70-bc2eb39a539c\") " pod="nova-kuttl-default/nova-api-a703-account-create-update-r88t7" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.352217 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdwx7\" (UniqueName: \"kubernetes.io/projected/25b927e3-d3f5-4343-af70-bc2eb39a539c-kube-api-access-xdwx7\") pod \"nova-api-a703-account-create-update-r88t7\" (UID: \"25b927e3-d3f5-4343-af70-bc2eb39a539c\") " pod="nova-kuttl-default/nova-api-a703-account-create-update-r88t7" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.352328 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d555b18f-0774-4e4c-9b9d-10ee1335d432-operator-scripts\") pod \"nova-cell1-db-create-fxmm7\" (UID: \"d555b18f-0774-4e4c-9b9d-10ee1335d432\") " pod="nova-kuttl-default/nova-cell1-db-create-fxmm7" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.352390 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25b927e3-d3f5-4343-af70-bc2eb39a539c-operator-scripts\") pod \"nova-api-a703-account-create-update-r88t7\" (UID: \"25b927e3-d3f5-4343-af70-bc2eb39a539c\") " pod="nova-kuttl-default/nova-api-a703-account-create-update-r88t7" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.352407 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svg47\" (UniqueName: \"kubernetes.io/projected/d555b18f-0774-4e4c-9b9d-10ee1335d432-kube-api-access-svg47\") pod \"nova-cell1-db-create-fxmm7\" (UID: \"d555b18f-0774-4e4c-9b9d-10ee1335d432\") " pod="nova-kuttl-default/nova-cell1-db-create-fxmm7" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.359213 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-vxgfw" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.363031 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-9d70-account-create-update-97bqx"] Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.364456 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-9d70-account-create-update-97bqx" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.367775 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell0-db-secret" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.374018 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdwx7\" (UniqueName: \"kubernetes.io/projected/25b927e3-d3f5-4343-af70-bc2eb39a539c-kube-api-access-xdwx7\") pod \"nova-api-a703-account-create-update-r88t7\" (UID: \"25b927e3-d3f5-4343-af70-bc2eb39a539c\") " pod="nova-kuttl-default/nova-api-a703-account-create-update-r88t7" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.392486 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-9d70-account-create-update-97bqx"] Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.454373 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d555b18f-0774-4e4c-9b9d-10ee1335d432-operator-scripts\") pod \"nova-cell1-db-create-fxmm7\" (UID: \"d555b18f-0774-4e4c-9b9d-10ee1335d432\") " pod="nova-kuttl-default/nova-cell1-db-create-fxmm7" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.454499 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj962\" (UniqueName: \"kubernetes.io/projected/51a939d5-f485-40b5-bc7b-05d3e063db83-kube-api-access-jj962\") pod \"nova-cell0-9d70-account-create-update-97bqx\" (UID: \"51a939d5-f485-40b5-bc7b-05d3e063db83\") " pod="nova-kuttl-default/nova-cell0-9d70-account-create-update-97bqx" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.454607 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svg47\" (UniqueName: \"kubernetes.io/projected/d555b18f-0774-4e4c-9b9d-10ee1335d432-kube-api-access-svg47\") pod \"nova-cell1-db-create-fxmm7\" (UID: \"d555b18f-0774-4e4c-9b9d-10ee1335d432\") " pod="nova-kuttl-default/nova-cell1-db-create-fxmm7" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.454647 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51a939d5-f485-40b5-bc7b-05d3e063db83-operator-scripts\") pod \"nova-cell0-9d70-account-create-update-97bqx\" (UID: \"51a939d5-f485-40b5-bc7b-05d3e063db83\") " pod="nova-kuttl-default/nova-cell0-9d70-account-create-update-97bqx" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.455211 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d555b18f-0774-4e4c-9b9d-10ee1335d432-operator-scripts\") pod \"nova-cell1-db-create-fxmm7\" (UID: \"d555b18f-0774-4e4c-9b9d-10ee1335d432\") " pod="nova-kuttl-default/nova-cell1-db-create-fxmm7" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.479768 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svg47\" (UniqueName: \"kubernetes.io/projected/d555b18f-0774-4e4c-9b9d-10ee1335d432-kube-api-access-svg47\") pod \"nova-cell1-db-create-fxmm7\" (UID: \"d555b18f-0774-4e4c-9b9d-10ee1335d432\") " pod="nova-kuttl-default/nova-cell1-db-create-fxmm7" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.537002 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-a703-account-create-update-r88t7" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.553127 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-2799-account-create-update-rspsx"] Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.554568 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-2799-account-create-update-rspsx" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.558620 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell1-db-secret" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.561938 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj962\" (UniqueName: \"kubernetes.io/projected/51a939d5-f485-40b5-bc7b-05d3e063db83-kube-api-access-jj962\") pod \"nova-cell0-9d70-account-create-update-97bqx\" (UID: \"51a939d5-f485-40b5-bc7b-05d3e063db83\") " pod="nova-kuttl-default/nova-cell0-9d70-account-create-update-97bqx" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.562045 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51a939d5-f485-40b5-bc7b-05d3e063db83-operator-scripts\") pod \"nova-cell0-9d70-account-create-update-97bqx\" (UID: \"51a939d5-f485-40b5-bc7b-05d3e063db83\") " pod="nova-kuttl-default/nova-cell0-9d70-account-create-update-97bqx" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.563088 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-2799-account-create-update-rspsx"] Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.563503 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51a939d5-f485-40b5-bc7b-05d3e063db83-operator-scripts\") pod \"nova-cell0-9d70-account-create-update-97bqx\" (UID: \"51a939d5-f485-40b5-bc7b-05d3e063db83\") " pod="nova-kuttl-default/nova-cell0-9d70-account-create-update-97bqx" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.588814 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj962\" (UniqueName: \"kubernetes.io/projected/51a939d5-f485-40b5-bc7b-05d3e063db83-kube-api-access-jj962\") pod \"nova-cell0-9d70-account-create-update-97bqx\" (UID: \"51a939d5-f485-40b5-bc7b-05d3e063db83\") " pod="nova-kuttl-default/nova-cell0-9d70-account-create-update-97bqx" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.625134 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-fxmm7" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.663270 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4549a6c6-f4a4-463a-8b6e-2a0d7edeae42-operator-scripts\") pod \"nova-cell1-2799-account-create-update-rspsx\" (UID: \"4549a6c6-f4a4-463a-8b6e-2a0d7edeae42\") " pod="nova-kuttl-default/nova-cell1-2799-account-create-update-rspsx" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.663529 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t2s7\" (UniqueName: \"kubernetes.io/projected/4549a6c6-f4a4-463a-8b6e-2a0d7edeae42-kube-api-access-2t2s7\") pod \"nova-cell1-2799-account-create-update-rspsx\" (UID: \"4549a6c6-f4a4-463a-8b6e-2a0d7edeae42\") " pod="nova-kuttl-default/nova-cell1-2799-account-create-update-rspsx" Jan 28 15:33:50 crc kubenswrapper[4893]: E0128 15:33:50.663745 4893 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 28 15:33:50 crc kubenswrapper[4893]: E0128 15:33:50.663811 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0fdb187d-14cc-4e15-b604-c1f913305e00-config-data podName:0fdb187d-14cc-4e15-b604-c1f913305e00 nodeName:}" failed. No retries permitted until 2026-01-28 15:33:58.663790896 +0000 UTC m=+1956.437405924 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/0fdb187d-14cc-4e15-b604-c1f913305e00-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "0fdb187d-14cc-4e15-b604-c1f913305e00") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.764634 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4549a6c6-f4a4-463a-8b6e-2a0d7edeae42-operator-scripts\") pod \"nova-cell1-2799-account-create-update-rspsx\" (UID: \"4549a6c6-f4a4-463a-8b6e-2a0d7edeae42\") " pod="nova-kuttl-default/nova-cell1-2799-account-create-update-rspsx" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.764757 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t2s7\" (UniqueName: \"kubernetes.io/projected/4549a6c6-f4a4-463a-8b6e-2a0d7edeae42-kube-api-access-2t2s7\") pod \"nova-cell1-2799-account-create-update-rspsx\" (UID: \"4549a6c6-f4a4-463a-8b6e-2a0d7edeae42\") " pod="nova-kuttl-default/nova-cell1-2799-account-create-update-rspsx" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.765508 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4549a6c6-f4a4-463a-8b6e-2a0d7edeae42-operator-scripts\") pod \"nova-cell1-2799-account-create-update-rspsx\" (UID: \"4549a6c6-f4a4-463a-8b6e-2a0d7edeae42\") " pod="nova-kuttl-default/nova-cell1-2799-account-create-update-rspsx" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.788037 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t2s7\" (UniqueName: \"kubernetes.io/projected/4549a6c6-f4a4-463a-8b6e-2a0d7edeae42-kube-api-access-2t2s7\") pod \"nova-cell1-2799-account-create-update-rspsx\" (UID: 
\"4549a6c6-f4a4-463a-8b6e-2a0d7edeae42\") " pod="nova-kuttl-default/nova-cell1-2799-account-create-update-rspsx" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.795199 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-9d70-account-create-update-97bqx" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.885186 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-2799-account-create-update-rspsx" Jan 28 15:33:50 crc kubenswrapper[4893]: I0128 15:33:50.967548 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-hc6tn"] Jan 28 15:33:51 crc kubenswrapper[4893]: I0128 15:33:51.059310 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-vxgfw"] Jan 28 15:33:51 crc kubenswrapper[4893]: W0128 15:33:51.070358 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bf5b624_d148_4c17_8824_77512ecaadba.slice/crio-009de230d09b6cdca74247b5b90b7bd6e4a8d94a0c1000091a4a95ff90f86a7c WatchSource:0}: Error finding container 009de230d09b6cdca74247b5b90b7bd6e4a8d94a0c1000091a4a95ff90f86a7c: Status 404 returned error can't find the container with id 009de230d09b6cdca74247b5b90b7bd6e4a8d94a0c1000091a4a95ff90f86a7c Jan 28 15:33:51 crc kubenswrapper[4893]: I0128 15:33:51.186456 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-fxmm7"] Jan 28 15:33:51 crc kubenswrapper[4893]: I0128 15:33:51.198715 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-vxgfw" event={"ID":"5bf5b624-d148-4c17-8824-77512ecaadba","Type":"ContainerStarted","Data":"009de230d09b6cdca74247b5b90b7bd6e4a8d94a0c1000091a4a95ff90f86a7c"} Jan 28 15:33:51 crc kubenswrapper[4893]: I0128 15:33:51.203022 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-hc6tn" event={"ID":"4bb0a658-a2dc-4442-a362-e1a6fd576848","Type":"ContainerStarted","Data":"6cef781434e99c9e8c62fda5ccb4b78aee1df550e2b5077264b378c04d68a3b8"} Jan 28 15:33:51 crc kubenswrapper[4893]: I0128 15:33:51.203076 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-hc6tn" event={"ID":"4bb0a658-a2dc-4442-a362-e1a6fd576848","Type":"ContainerStarted","Data":"fb2879e579e5a3c61d9c8fe6b942a1824f61f4b10a7485832bb0521670dece62"} Jan 28 15:33:51 crc kubenswrapper[4893]: I0128 15:33:51.221301 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-a703-account-create-update-r88t7"] Jan 28 15:33:51 crc kubenswrapper[4893]: I0128 15:33:51.233462 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-api-db-create-hc6tn" podStartSLOduration=2.233440268 podStartE2EDuration="2.233440268s" podCreationTimestamp="2026-01-28 15:33:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:33:51.220216279 +0000 UTC m=+1948.993831307" watchObservedRunningTime="2026-01-28 15:33:51.233440268 +0000 UTC m=+1949.007055296" Jan 28 15:33:51 crc kubenswrapper[4893]: I0128 15:33:51.380194 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-9d70-account-create-update-97bqx"] Jan 28 15:33:51 crc kubenswrapper[4893]: W0128 15:33:51.390249 
4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51a939d5_f485_40b5_bc7b_05d3e063db83.slice/crio-3043a1188217f9cfeb38e0b8f357c4a767ff994b608d17cec70a1dbba498e6f6 WatchSource:0}: Error finding container 3043a1188217f9cfeb38e0b8f357c4a767ff994b608d17cec70a1dbba498e6f6: Status 404 returned error can't find the container with id 3043a1188217f9cfeb38e0b8f357c4a767ff994b608d17cec70a1dbba498e6f6 Jan 28 15:33:51 crc kubenswrapper[4893]: I0128 15:33:51.549017 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-2799-account-create-update-rspsx"] Jan 28 15:33:51 crc kubenswrapper[4893]: W0128 15:33:51.613542 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4549a6c6_f4a4_463a_8b6e_2a0d7edeae42.slice/crio-45c6515459ebdfe103f878e4040592e649f55231a1760f2853c37c6048d97387 WatchSource:0}: Error finding container 45c6515459ebdfe103f878e4040592e649f55231a1760f2853c37c6048d97387: Status 404 returned error can't find the container with id 45c6515459ebdfe103f878e4040592e649f55231a1760f2853c37c6048d97387 Jan 28 15:33:52 crc kubenswrapper[4893]: I0128 15:33:52.222282 4893 generic.go:334] "Generic (PLEG): container finished" podID="4549a6c6-f4a4-463a-8b6e-2a0d7edeae42" containerID="e42174f11cbe4f4338d91cb5807ee52abdfb400cc21db55b31781e752240ccdf" exitCode=0 Jan 28 15:33:52 crc kubenswrapper[4893]: I0128 15:33:52.222367 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-2799-account-create-update-rspsx" event={"ID":"4549a6c6-f4a4-463a-8b6e-2a0d7edeae42","Type":"ContainerDied","Data":"e42174f11cbe4f4338d91cb5807ee52abdfb400cc21db55b31781e752240ccdf"} Jan 28 15:33:52 crc kubenswrapper[4893]: I0128 15:33:52.222711 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-2799-account-create-update-rspsx" event={"ID":"4549a6c6-f4a4-463a-8b6e-2a0d7edeae42","Type":"ContainerStarted","Data":"45c6515459ebdfe103f878e4040592e649f55231a1760f2853c37c6048d97387"} Jan 28 15:33:52 crc kubenswrapper[4893]: I0128 15:33:52.224573 4893 generic.go:334] "Generic (PLEG): container finished" podID="4bb0a658-a2dc-4442-a362-e1a6fd576848" containerID="6cef781434e99c9e8c62fda5ccb4b78aee1df550e2b5077264b378c04d68a3b8" exitCode=0 Jan 28 15:33:52 crc kubenswrapper[4893]: I0128 15:33:52.224652 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-hc6tn" event={"ID":"4bb0a658-a2dc-4442-a362-e1a6fd576848","Type":"ContainerDied","Data":"6cef781434e99c9e8c62fda5ccb4b78aee1df550e2b5077264b378c04d68a3b8"} Jan 28 15:33:52 crc kubenswrapper[4893]: I0128 15:33:52.227172 4893 generic.go:334] "Generic (PLEG): container finished" podID="51a939d5-f485-40b5-bc7b-05d3e063db83" containerID="22916b06bb8bc7cd3caf7baf6c7a38757f439eae087bdf60044f4265c822d466" exitCode=0 Jan 28 15:33:52 crc kubenswrapper[4893]: I0128 15:33:52.227249 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-9d70-account-create-update-97bqx" event={"ID":"51a939d5-f485-40b5-bc7b-05d3e063db83","Type":"ContainerDied","Data":"22916b06bb8bc7cd3caf7baf6c7a38757f439eae087bdf60044f4265c822d466"} Jan 28 15:33:52 crc kubenswrapper[4893]: I0128 15:33:52.227280 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-9d70-account-create-update-97bqx" 
event={"ID":"51a939d5-f485-40b5-bc7b-05d3e063db83","Type":"ContainerStarted","Data":"3043a1188217f9cfeb38e0b8f357c4a767ff994b608d17cec70a1dbba498e6f6"} Jan 28 15:33:52 crc kubenswrapper[4893]: I0128 15:33:52.228624 4893 generic.go:334] "Generic (PLEG): container finished" podID="25b927e3-d3f5-4343-af70-bc2eb39a539c" containerID="d4dc0c8918a1f57be549680e1e2559cc57b0bb1d562071ed9ec465286db525e3" exitCode=0 Jan 28 15:33:52 crc kubenswrapper[4893]: I0128 15:33:52.228675 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-a703-account-create-update-r88t7" event={"ID":"25b927e3-d3f5-4343-af70-bc2eb39a539c","Type":"ContainerDied","Data":"d4dc0c8918a1f57be549680e1e2559cc57b0bb1d562071ed9ec465286db525e3"} Jan 28 15:33:52 crc kubenswrapper[4893]: I0128 15:33:52.228691 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-a703-account-create-update-r88t7" event={"ID":"25b927e3-d3f5-4343-af70-bc2eb39a539c","Type":"ContainerStarted","Data":"b6f44ea91fb4df30cb0ec27151bc1659c1a6199afba8d9b0b315480a8721421e"} Jan 28 15:33:52 crc kubenswrapper[4893]: I0128 15:33:52.230039 4893 generic.go:334] "Generic (PLEG): container finished" podID="5bf5b624-d148-4c17-8824-77512ecaadba" containerID="7f2401bf212a6af535113f18466517668c066e7944c8fe333b0d1a142cb2e55a" exitCode=0 Jan 28 15:33:52 crc kubenswrapper[4893]: I0128 15:33:52.230092 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-vxgfw" event={"ID":"5bf5b624-d148-4c17-8824-77512ecaadba","Type":"ContainerDied","Data":"7f2401bf212a6af535113f18466517668c066e7944c8fe333b0d1a142cb2e55a"} Jan 28 15:33:52 crc kubenswrapper[4893]: I0128 15:33:52.231398 4893 generic.go:334] "Generic (PLEG): container finished" podID="d555b18f-0774-4e4c-9b9d-10ee1335d432" containerID="72b971eb6981c8a776084ddebc87b507bcc6a2aecbba7c3b172051f37d37e6c8" exitCode=0 Jan 28 15:33:52 crc kubenswrapper[4893]: I0128 15:33:52.231429 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-fxmm7" event={"ID":"d555b18f-0774-4e4c-9b9d-10ee1335d432","Type":"ContainerDied","Data":"72b971eb6981c8a776084ddebc87b507bcc6a2aecbba7c3b172051f37d37e6c8"} Jan 28 15:33:52 crc kubenswrapper[4893]: I0128 15:33:52.231449 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-fxmm7" event={"ID":"d555b18f-0774-4e4c-9b9d-10ee1335d432","Type":"ContainerStarted","Data":"218eecdff24de2cd67b48d9337b93cd2b92fc3063d1566ac4866104017fca2b4"} Jan 28 15:33:53 crc kubenswrapper[4893]: I0128 15:33:53.629049 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-9d70-account-create-update-97bqx" Jan 28 15:33:53 crc kubenswrapper[4893]: I0128 15:33:53.717379 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj962\" (UniqueName: \"kubernetes.io/projected/51a939d5-f485-40b5-bc7b-05d3e063db83-kube-api-access-jj962\") pod \"51a939d5-f485-40b5-bc7b-05d3e063db83\" (UID: \"51a939d5-f485-40b5-bc7b-05d3e063db83\") " Jan 28 15:33:53 crc kubenswrapper[4893]: I0128 15:33:53.717517 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51a939d5-f485-40b5-bc7b-05d3e063db83-operator-scripts\") pod \"51a939d5-f485-40b5-bc7b-05d3e063db83\" (UID: \"51a939d5-f485-40b5-bc7b-05d3e063db83\") " Jan 28 15:33:53 crc kubenswrapper[4893]: I0128 15:33:53.718520 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51a939d5-f485-40b5-bc7b-05d3e063db83-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "51a939d5-f485-40b5-bc7b-05d3e063db83" (UID: "51a939d5-f485-40b5-bc7b-05d3e063db83"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:33:53 crc kubenswrapper[4893]: I0128 15:33:53.724042 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51a939d5-f485-40b5-bc7b-05d3e063db83-kube-api-access-jj962" (OuterVolumeSpecName: "kube-api-access-jj962") pod "51a939d5-f485-40b5-bc7b-05d3e063db83" (UID: "51a939d5-f485-40b5-bc7b-05d3e063db83"). InnerVolumeSpecName "kube-api-access-jj962". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:33:53 crc kubenswrapper[4893]: I0128 15:33:53.819594 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj962\" (UniqueName: \"kubernetes.io/projected/51a939d5-f485-40b5-bc7b-05d3e063db83-kube-api-access-jj962\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:53 crc kubenswrapper[4893]: I0128 15:33:53.819623 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51a939d5-f485-40b5-bc7b-05d3e063db83-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:53 crc kubenswrapper[4893]: I0128 15:33:53.923895 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-a703-account-create-update-r88t7" Jan 28 15:33:53 crc kubenswrapper[4893]: I0128 15:33:53.930860 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-fxmm7" Jan 28 15:33:53 crc kubenswrapper[4893]: I0128 15:33:53.941263 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-hc6tn" Jan 28 15:33:53 crc kubenswrapper[4893]: I0128 15:33:53.957160 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-vxgfw" Jan 28 15:33:53 crc kubenswrapper[4893]: I0128 15:33:53.969510 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-2799-account-create-update-rspsx" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.024857 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d555b18f-0774-4e4c-9b9d-10ee1335d432-operator-scripts\") pod \"d555b18f-0774-4e4c-9b9d-10ee1335d432\" (UID: \"d555b18f-0774-4e4c-9b9d-10ee1335d432\") " Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.024931 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk55b\" (UniqueName: \"kubernetes.io/projected/4bb0a658-a2dc-4442-a362-e1a6fd576848-kube-api-access-bk55b\") pod \"4bb0a658-a2dc-4442-a362-e1a6fd576848\" (UID: \"4bb0a658-a2dc-4442-a362-e1a6fd576848\") " Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.024993 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25b927e3-d3f5-4343-af70-bc2eb39a539c-operator-scripts\") pod \"25b927e3-d3f5-4343-af70-bc2eb39a539c\" (UID: \"25b927e3-d3f5-4343-af70-bc2eb39a539c\") " Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.028075 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t2s7\" (UniqueName: \"kubernetes.io/projected/4549a6c6-f4a4-463a-8b6e-2a0d7edeae42-kube-api-access-2t2s7\") pod \"4549a6c6-f4a4-463a-8b6e-2a0d7edeae42\" (UID: \"4549a6c6-f4a4-463a-8b6e-2a0d7edeae42\") " Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.028135 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4549a6c6-f4a4-463a-8b6e-2a0d7edeae42-operator-scripts\") pod \"4549a6c6-f4a4-463a-8b6e-2a0d7edeae42\" (UID: \"4549a6c6-f4a4-463a-8b6e-2a0d7edeae42\") " Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.028197 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svg47\" (UniqueName: \"kubernetes.io/projected/d555b18f-0774-4e4c-9b9d-10ee1335d432-kube-api-access-svg47\") pod \"d555b18f-0774-4e4c-9b9d-10ee1335d432\" (UID: \"d555b18f-0774-4e4c-9b9d-10ee1335d432\") " Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.028446 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bb0a658-a2dc-4442-a362-e1a6fd576848-operator-scripts\") pod \"4bb0a658-a2dc-4442-a362-e1a6fd576848\" (UID: \"4bb0a658-a2dc-4442-a362-e1a6fd576848\") " Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.028532 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdwx7\" (UniqueName: \"kubernetes.io/projected/25b927e3-d3f5-4343-af70-bc2eb39a539c-kube-api-access-xdwx7\") pod \"25b927e3-d3f5-4343-af70-bc2eb39a539c\" (UID: \"25b927e3-d3f5-4343-af70-bc2eb39a539c\") " Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.028567 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bf5b624-d148-4c17-8824-77512ecaadba-operator-scripts\") pod \"5bf5b624-d148-4c17-8824-77512ecaadba\" (UID: \"5bf5b624-d148-4c17-8824-77512ecaadba\") " Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.028623 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qmr2\" (UniqueName: 
\"kubernetes.io/projected/5bf5b624-d148-4c17-8824-77512ecaadba-kube-api-access-4qmr2\") pod \"5bf5b624-d148-4c17-8824-77512ecaadba\" (UID: \"5bf5b624-d148-4c17-8824-77512ecaadba\") " Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.030342 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4549a6c6-f4a4-463a-8b6e-2a0d7edeae42-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4549a6c6-f4a4-463a-8b6e-2a0d7edeae42" (UID: "4549a6c6-f4a4-463a-8b6e-2a0d7edeae42"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.030675 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d555b18f-0774-4e4c-9b9d-10ee1335d432-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d555b18f-0774-4e4c-9b9d-10ee1335d432" (UID: "d555b18f-0774-4e4c-9b9d-10ee1335d432"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.032398 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25b927e3-d3f5-4343-af70-bc2eb39a539c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "25b927e3-d3f5-4343-af70-bc2eb39a539c" (UID: "25b927e3-d3f5-4343-af70-bc2eb39a539c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.033871 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb0a658-a2dc-4442-a362-e1a6fd576848-kube-api-access-bk55b" (OuterVolumeSpecName: "kube-api-access-bk55b") pod "4bb0a658-a2dc-4442-a362-e1a6fd576848" (UID: "4bb0a658-a2dc-4442-a362-e1a6fd576848"). InnerVolumeSpecName "kube-api-access-bk55b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.035621 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bf5b624-d148-4c17-8824-77512ecaadba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5bf5b624-d148-4c17-8824-77512ecaadba" (UID: "5bf5b624-d148-4c17-8824-77512ecaadba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.035613 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb0a658-a2dc-4442-a362-e1a6fd576848-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4bb0a658-a2dc-4442-a362-e1a6fd576848" (UID: "4bb0a658-a2dc-4442-a362-e1a6fd576848"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.036368 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25b927e3-d3f5-4343-af70-bc2eb39a539c-kube-api-access-xdwx7" (OuterVolumeSpecName: "kube-api-access-xdwx7") pod "25b927e3-d3f5-4343-af70-bc2eb39a539c" (UID: "25b927e3-d3f5-4343-af70-bc2eb39a539c"). InnerVolumeSpecName "kube-api-access-xdwx7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.037906 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d555b18f-0774-4e4c-9b9d-10ee1335d432-kube-api-access-svg47" (OuterVolumeSpecName: "kube-api-access-svg47") pod "d555b18f-0774-4e4c-9b9d-10ee1335d432" (UID: "d555b18f-0774-4e4c-9b9d-10ee1335d432"). InnerVolumeSpecName "kube-api-access-svg47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.038155 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4549a6c6-f4a4-463a-8b6e-2a0d7edeae42-kube-api-access-2t2s7" (OuterVolumeSpecName: "kube-api-access-2t2s7") pod "4549a6c6-f4a4-463a-8b6e-2a0d7edeae42" (UID: "4549a6c6-f4a4-463a-8b6e-2a0d7edeae42"). InnerVolumeSpecName "kube-api-access-2t2s7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.038670 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bf5b624-d148-4c17-8824-77512ecaadba-kube-api-access-4qmr2" (OuterVolumeSpecName: "kube-api-access-4qmr2") pod "5bf5b624-d148-4c17-8824-77512ecaadba" (UID: "5bf5b624-d148-4c17-8824-77512ecaadba"). InnerVolumeSpecName "kube-api-access-4qmr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.133598 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svg47\" (UniqueName: \"kubernetes.io/projected/d555b18f-0774-4e4c-9b9d-10ee1335d432-kube-api-access-svg47\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.133659 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bb0a658-a2dc-4442-a362-e1a6fd576848-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.133672 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdwx7\" (UniqueName: \"kubernetes.io/projected/25b927e3-d3f5-4343-af70-bc2eb39a539c-kube-api-access-xdwx7\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.133683 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bf5b624-d148-4c17-8824-77512ecaadba-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.133695 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qmr2\" (UniqueName: \"kubernetes.io/projected/5bf5b624-d148-4c17-8824-77512ecaadba-kube-api-access-4qmr2\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.133706 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d555b18f-0774-4e4c-9b9d-10ee1335d432-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.133717 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk55b\" (UniqueName: \"kubernetes.io/projected/4bb0a658-a2dc-4442-a362-e1a6fd576848-kube-api-access-bk55b\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.133729 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/25b927e3-d3f5-4343-af70-bc2eb39a539c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.133739 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2t2s7\" (UniqueName: \"kubernetes.io/projected/4549a6c6-f4a4-463a-8b6e-2a0d7edeae42-kube-api-access-2t2s7\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.133749 4893 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4549a6c6-f4a4-463a-8b6e-2a0d7edeae42-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.250108 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-a703-account-create-update-r88t7" event={"ID":"25b927e3-d3f5-4343-af70-bc2eb39a539c","Type":"ContainerDied","Data":"b6f44ea91fb4df30cb0ec27151bc1659c1a6199afba8d9b0b315480a8721421e"} Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.250160 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6f44ea91fb4df30cb0ec27151bc1659c1a6199afba8d9b0b315480a8721421e" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.250280 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-a703-account-create-update-r88t7" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.261217 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-vxgfw" event={"ID":"5bf5b624-d148-4c17-8824-77512ecaadba","Type":"ContainerDied","Data":"009de230d09b6cdca74247b5b90b7bd6e4a8d94a0c1000091a4a95ff90f86a7c"} Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.261260 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="009de230d09b6cdca74247b5b90b7bd6e4a8d94a0c1000091a4a95ff90f86a7c" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.261327 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-vxgfw" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.270087 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-fxmm7" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.270099 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-fxmm7" event={"ID":"d555b18f-0774-4e4c-9b9d-10ee1335d432","Type":"ContainerDied","Data":"218eecdff24de2cd67b48d9337b93cd2b92fc3063d1566ac4866104017fca2b4"} Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.270995 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="218eecdff24de2cd67b48d9337b93cd2b92fc3063d1566ac4866104017fca2b4" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.272045 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-2799-account-create-update-rspsx" event={"ID":"4549a6c6-f4a4-463a-8b6e-2a0d7edeae42","Type":"ContainerDied","Data":"45c6515459ebdfe103f878e4040592e649f55231a1760f2853c37c6048d97387"} Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.272074 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45c6515459ebdfe103f878e4040592e649f55231a1760f2853c37c6048d97387" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.272128 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-2799-account-create-update-rspsx" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.277291 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-hc6tn" event={"ID":"4bb0a658-a2dc-4442-a362-e1a6fd576848","Type":"ContainerDied","Data":"fb2879e579e5a3c61d9c8fe6b942a1824f61f4b10a7485832bb0521670dece62"} Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.277334 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb2879e579e5a3c61d9c8fe6b942a1824f61f4b10a7485832bb0521670dece62" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.277311 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-hc6tn" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.279705 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-9d70-account-create-update-97bqx" event={"ID":"51a939d5-f485-40b5-bc7b-05d3e063db83","Type":"ContainerDied","Data":"3043a1188217f9cfeb38e0b8f357c4a767ff994b608d17cec70a1dbba498e6f6"} Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.279731 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-9d70-account-create-update-97bqx" Jan 28 15:33:54 crc kubenswrapper[4893]: I0128 15:33:54.279734 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3043a1188217f9cfeb38e0b8f357c4a767ff994b608d17cec70a1dbba498e6f6" Jan 28 15:33:54 crc kubenswrapper[4893]: E0128 15:33:54.905065 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19a3c0768eaf9982ac92395a5658756fe6092b2a770ba144c8d5a60ecbfa1dc8" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 15:33:54 crc kubenswrapper[4893]: E0128 15:33:54.906517 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19a3c0768eaf9982ac92395a5658756fe6092b2a770ba144c8d5a60ecbfa1dc8" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 15:33:54 crc kubenswrapper[4893]: E0128 15:33:54.908065 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19a3c0768eaf9982ac92395a5658756fe6092b2a770ba144c8d5a60ecbfa1dc8" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 15:33:54 crc kubenswrapper[4893]: E0128 15:33:54.908108 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="0fdb187d-14cc-4e15-b604-c1f913305e00" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 28 15:33:55 crc kubenswrapper[4893]: I0128 15:33:55.568528 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jr59w"] Jan 28 15:33:55 crc kubenswrapper[4893]: E0128 15:33:55.568867 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d555b18f-0774-4e4c-9b9d-10ee1335d432" containerName="mariadb-database-create" Jan 28 15:33:55 crc kubenswrapper[4893]: I0128 15:33:55.568885 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="d555b18f-0774-4e4c-9b9d-10ee1335d432" containerName="mariadb-database-create" Jan 28 15:33:55 crc kubenswrapper[4893]: E0128 15:33:55.568904 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51a939d5-f485-40b5-bc7b-05d3e063db83" containerName="mariadb-account-create-update" Jan 28 15:33:55 crc kubenswrapper[4893]: I0128 15:33:55.568911 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="51a939d5-f485-40b5-bc7b-05d3e063db83" containerName="mariadb-account-create-update" Jan 28 15:33:55 crc kubenswrapper[4893]: E0128 15:33:55.568925 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4549a6c6-f4a4-463a-8b6e-2a0d7edeae42" containerName="mariadb-account-create-update" Jan 28 15:33:55 crc kubenswrapper[4893]: I0128 15:33:55.568931 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="4549a6c6-f4a4-463a-8b6e-2a0d7edeae42" containerName="mariadb-account-create-update" Jan 28 15:33:55 crc kubenswrapper[4893]: E0128 15:33:55.568939 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25b927e3-d3f5-4343-af70-bc2eb39a539c" 
containerName="mariadb-account-create-update" Jan 28 15:33:55 crc kubenswrapper[4893]: I0128 15:33:55.568945 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="25b927e3-d3f5-4343-af70-bc2eb39a539c" containerName="mariadb-account-create-update" Jan 28 15:33:55 crc kubenswrapper[4893]: E0128 15:33:55.568955 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bf5b624-d148-4c17-8824-77512ecaadba" containerName="mariadb-database-create" Jan 28 15:33:55 crc kubenswrapper[4893]: I0128 15:33:55.568961 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bf5b624-d148-4c17-8824-77512ecaadba" containerName="mariadb-database-create" Jan 28 15:33:55 crc kubenswrapper[4893]: E0128 15:33:55.568970 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bb0a658-a2dc-4442-a362-e1a6fd576848" containerName="mariadb-database-create" Jan 28 15:33:55 crc kubenswrapper[4893]: I0128 15:33:55.568976 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bb0a658-a2dc-4442-a362-e1a6fd576848" containerName="mariadb-database-create" Jan 28 15:33:55 crc kubenswrapper[4893]: I0128 15:33:55.569116 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="25b927e3-d3f5-4343-af70-bc2eb39a539c" containerName="mariadb-account-create-update" Jan 28 15:33:55 crc kubenswrapper[4893]: I0128 15:33:55.569130 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="4549a6c6-f4a4-463a-8b6e-2a0d7edeae42" containerName="mariadb-account-create-update" Jan 28 15:33:55 crc kubenswrapper[4893]: I0128 15:33:55.569147 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bb0a658-a2dc-4442-a362-e1a6fd576848" containerName="mariadb-database-create" Jan 28 15:33:55 crc kubenswrapper[4893]: I0128 15:33:55.569156 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="51a939d5-f485-40b5-bc7b-05d3e063db83" containerName="mariadb-account-create-update" Jan 28 15:33:55 crc kubenswrapper[4893]: I0128 15:33:55.569166 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="d555b18f-0774-4e4c-9b9d-10ee1335d432" containerName="mariadb-database-create" Jan 28 15:33:55 crc kubenswrapper[4893]: I0128 15:33:55.569177 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bf5b624-d148-4c17-8824-77512ecaadba" containerName="mariadb-database-create" Jan 28 15:33:55 crc kubenswrapper[4893]: I0128 15:33:55.569688 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jr59w" Jan 28 15:33:55 crc kubenswrapper[4893]: I0128 15:33:55.571888 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-scripts" Jan 28 15:33:55 crc kubenswrapper[4893]: I0128 15:33:55.571915 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-nthlc" Jan 28 15:33:55 crc kubenswrapper[4893]: I0128 15:33:55.572743 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 28 15:33:55 crc kubenswrapper[4893]: I0128 15:33:55.582922 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jr59w"] Jan 28 15:33:55 crc kubenswrapper[4893]: I0128 15:33:55.659817 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e84c5ebf-c963-4acb-b64f-107efda9798d-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-jr59w\" (UID: \"e84c5ebf-c963-4acb-b64f-107efda9798d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jr59w" Jan 28 15:33:55 crc kubenswrapper[4893]: I0128 15:33:55.659953 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88gx7\" (UniqueName: \"kubernetes.io/projected/e84c5ebf-c963-4acb-b64f-107efda9798d-kube-api-access-88gx7\") pod \"nova-kuttl-cell0-conductor-db-sync-jr59w\" (UID: \"e84c5ebf-c963-4acb-b64f-107efda9798d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jr59w" Jan 28 15:33:55 crc kubenswrapper[4893]: I0128 15:33:55.660169 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e84c5ebf-c963-4acb-b64f-107efda9798d-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-jr59w\" (UID: \"e84c5ebf-c963-4acb-b64f-107efda9798d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jr59w" Jan 28 15:33:55 crc kubenswrapper[4893]: I0128 15:33:55.762564 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88gx7\" (UniqueName: \"kubernetes.io/projected/e84c5ebf-c963-4acb-b64f-107efda9798d-kube-api-access-88gx7\") pod \"nova-kuttl-cell0-conductor-db-sync-jr59w\" (UID: \"e84c5ebf-c963-4acb-b64f-107efda9798d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jr59w" Jan 28 15:33:55 crc kubenswrapper[4893]: I0128 15:33:55.762945 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e84c5ebf-c963-4acb-b64f-107efda9798d-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-jr59w\" (UID: \"e84c5ebf-c963-4acb-b64f-107efda9798d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jr59w" Jan 28 15:33:55 crc kubenswrapper[4893]: I0128 15:33:55.763018 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e84c5ebf-c963-4acb-b64f-107efda9798d-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-jr59w\" (UID: \"e84c5ebf-c963-4acb-b64f-107efda9798d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jr59w" Jan 28 15:33:55 crc kubenswrapper[4893]: I0128 15:33:55.767527 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/e84c5ebf-c963-4acb-b64f-107efda9798d-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-jr59w\" (UID: \"e84c5ebf-c963-4acb-b64f-107efda9798d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jr59w" Jan 28 15:33:55 crc kubenswrapper[4893]: I0128 15:33:55.770241 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e84c5ebf-c963-4acb-b64f-107efda9798d-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-jr59w\" (UID: \"e84c5ebf-c963-4acb-b64f-107efda9798d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jr59w" Jan 28 15:33:55 crc kubenswrapper[4893]: I0128 15:33:55.786059 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88gx7\" (UniqueName: \"kubernetes.io/projected/e84c5ebf-c963-4acb-b64f-107efda9798d-kube-api-access-88gx7\") pod \"nova-kuttl-cell0-conductor-db-sync-jr59w\" (UID: \"e84c5ebf-c963-4acb-b64f-107efda9798d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jr59w" Jan 28 15:33:55 crc kubenswrapper[4893]: I0128 15:33:55.920204 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jr59w" Jan 28 15:33:56 crc kubenswrapper[4893]: W0128 15:33:56.343232 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode84c5ebf_c963_4acb_b64f_107efda9798d.slice/crio-91c552cece28b41769474201b5feb1e8f136b2b8fa4d3b566bc92e485d4ff915 WatchSource:0}: Error finding container 91c552cece28b41769474201b5feb1e8f136b2b8fa4d3b566bc92e485d4ff915: Status 404 returned error can't find the container with id 91c552cece28b41769474201b5feb1e8f136b2b8fa4d3b566bc92e485d4ff915 Jan 28 15:33:56 crc kubenswrapper[4893]: I0128 15:33:56.343813 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jr59w"] Jan 28 15:33:57 crc kubenswrapper[4893]: I0128 15:33:57.302877 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jr59w" event={"ID":"e84c5ebf-c963-4acb-b64f-107efda9798d","Type":"ContainerStarted","Data":"877bb78ea59beb334968fdcc181ffbb610cd0da23d315de0bcf0e84bdb1f57df"} Jan 28 15:33:57 crc kubenswrapper[4893]: I0128 15:33:57.302934 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jr59w" event={"ID":"e84c5ebf-c963-4acb-b64f-107efda9798d","Type":"ContainerStarted","Data":"91c552cece28b41769474201b5feb1e8f136b2b8fa4d3b566bc92e485d4ff915"} Jan 28 15:33:57 crc kubenswrapper[4893]: I0128 15:33:57.324006 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jr59w" podStartSLOduration=2.323985386 podStartE2EDuration="2.323985386s" podCreationTimestamp="2026-01-28 15:33:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:33:57.318051475 +0000 UTC m=+1955.091666503" watchObservedRunningTime="2026-01-28 15:33:57.323985386 +0000 UTC m=+1955.097600414" Jan 28 15:33:58 crc kubenswrapper[4893]: E0128 15:33:58.728545 4893 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 28 15:33:58 crc 
kubenswrapper[4893]: E0128 15:33:58.728687 4893 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0fdb187d-14cc-4e15-b604-c1f913305e00-config-data podName:0fdb187d-14cc-4e15-b604-c1f913305e00 nodeName:}" failed. No retries permitted until 2026-01-28 15:34:14.728648843 +0000 UTC m=+1972.502263911 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/0fdb187d-14cc-4e15-b604-c1f913305e00-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "0fdb187d-14cc-4e15-b604-c1f913305e00") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 28 15:33:59 crc kubenswrapper[4893]: E0128 15:33:59.905727 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19a3c0768eaf9982ac92395a5658756fe6092b2a770ba144c8d5a60ecbfa1dc8" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 15:33:59 crc kubenswrapper[4893]: E0128 15:33:59.909005 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19a3c0768eaf9982ac92395a5658756fe6092b2a770ba144c8d5a60ecbfa1dc8" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 15:33:59 crc kubenswrapper[4893]: E0128 15:33:59.910128 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19a3c0768eaf9982ac92395a5658756fe6092b2a770ba144c8d5a60ecbfa1dc8" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 15:33:59 crc kubenswrapper[4893]: E0128 15:33:59.910163 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="0fdb187d-14cc-4e15-b604-c1f913305e00" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 28 15:34:01 crc kubenswrapper[4893]: I0128 15:34:01.892336 4893 scope.go:117] "RemoveContainer" containerID="79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51" Jan 28 15:34:01 crc kubenswrapper[4893]: E0128 15:34:01.892704 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:34:02 crc kubenswrapper[4893]: I0128 15:34:02.353335 4893 generic.go:334] "Generic (PLEG): container finished" podID="e84c5ebf-c963-4acb-b64f-107efda9798d" containerID="877bb78ea59beb334968fdcc181ffbb610cd0da23d315de0bcf0e84bdb1f57df" exitCode=0 Jan 28 15:34:02 crc kubenswrapper[4893]: I0128 15:34:02.353390 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jr59w" event={"ID":"e84c5ebf-c963-4acb-b64f-107efda9798d","Type":"ContainerDied","Data":"877bb78ea59beb334968fdcc181ffbb610cd0da23d315de0bcf0e84bdb1f57df"} Jan 28 15:34:03 crc 
Jan 28 15:34:03 crc kubenswrapper[4893]: I0128 15:34:03.677899 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jr59w"
Jan 28 15:34:03 crc kubenswrapper[4893]: I0128 15:34:03.819008 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e84c5ebf-c963-4acb-b64f-107efda9798d-scripts\") pod \"e84c5ebf-c963-4acb-b64f-107efda9798d\" (UID: \"e84c5ebf-c963-4acb-b64f-107efda9798d\") "
Jan 28 15:34:03 crc kubenswrapper[4893]: I0128 15:34:03.819094 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88gx7\" (UniqueName: \"kubernetes.io/projected/e84c5ebf-c963-4acb-b64f-107efda9798d-kube-api-access-88gx7\") pod \"e84c5ebf-c963-4acb-b64f-107efda9798d\" (UID: \"e84c5ebf-c963-4acb-b64f-107efda9798d\") "
Jan 28 15:34:03 crc kubenswrapper[4893]: I0128 15:34:03.819191 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e84c5ebf-c963-4acb-b64f-107efda9798d-config-data\") pod \"e84c5ebf-c963-4acb-b64f-107efda9798d\" (UID: \"e84c5ebf-c963-4acb-b64f-107efda9798d\") "
Jan 28 15:34:03 crc kubenswrapper[4893]: I0128 15:34:03.825922 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e84c5ebf-c963-4acb-b64f-107efda9798d-kube-api-access-88gx7" (OuterVolumeSpecName: "kube-api-access-88gx7") pod "e84c5ebf-c963-4acb-b64f-107efda9798d" (UID: "e84c5ebf-c963-4acb-b64f-107efda9798d"). InnerVolumeSpecName "kube-api-access-88gx7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:34:03 crc kubenswrapper[4893]: I0128 15:34:03.826631 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e84c5ebf-c963-4acb-b64f-107efda9798d-scripts" (OuterVolumeSpecName: "scripts") pod "e84c5ebf-c963-4acb-b64f-107efda9798d" (UID: "e84c5ebf-c963-4acb-b64f-107efda9798d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:34:03 crc kubenswrapper[4893]: I0128 15:34:03.850048 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e84c5ebf-c963-4acb-b64f-107efda9798d-config-data" (OuterVolumeSpecName: "config-data") pod "e84c5ebf-c963-4acb-b64f-107efda9798d" (UID: "e84c5ebf-c963-4acb-b64f-107efda9798d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:34:03 crc kubenswrapper[4893]: I0128 15:34:03.920747 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88gx7\" (UniqueName: \"kubernetes.io/projected/e84c5ebf-c963-4acb-b64f-107efda9798d-kube-api-access-88gx7\") on node \"crc\" DevicePath \"\""
Jan 28 15:34:03 crc kubenswrapper[4893]: I0128 15:34:03.920791 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e84c5ebf-c963-4acb-b64f-107efda9798d-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 15:34:03 crc kubenswrapper[4893]: I0128 15:34:03.920803 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e84c5ebf-c963-4acb-b64f-107efda9798d-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 15:34:04 crc kubenswrapper[4893]: I0128 15:34:04.369096 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jr59w" event={"ID":"e84c5ebf-c963-4acb-b64f-107efda9798d","Type":"ContainerDied","Data":"91c552cece28b41769474201b5feb1e8f136b2b8fa4d3b566bc92e485d4ff915"}
Jan 28 15:34:04 crc kubenswrapper[4893]: I0128 15:34:04.369139 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91c552cece28b41769474201b5feb1e8f136b2b8fa4d3b566bc92e485d4ff915"
Jan 28 15:34:04 crc kubenswrapper[4893]: I0128 15:34:04.369142 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jr59w"
Jan 28 15:34:04 crc kubenswrapper[4893]: I0128 15:34:04.459507 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"]
Jan 28 15:34:04 crc kubenswrapper[4893]: E0128 15:34:04.459922 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e84c5ebf-c963-4acb-b64f-107efda9798d" containerName="nova-kuttl-cell0-conductor-db-sync"
Jan 28 15:34:04 crc kubenswrapper[4893]: I0128 15:34:04.459948 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="e84c5ebf-c963-4acb-b64f-107efda9798d" containerName="nova-kuttl-cell0-conductor-db-sync"
Jan 28 15:34:04 crc kubenswrapper[4893]: I0128 15:34:04.460136 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="e84c5ebf-c963-4acb-b64f-107efda9798d" containerName="nova-kuttl-cell0-conductor-db-sync"
Jan 28 15:34:04 crc kubenswrapper[4893]: I0128 15:34:04.460762 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 28 15:34:04 crc kubenswrapper[4893]: I0128 15:34:04.463559 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-nthlc"
Jan 28 15:34:04 crc kubenswrapper[4893]: I0128 15:34:04.463807 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data"
Jan 28 15:34:04 crc kubenswrapper[4893]: I0128 15:34:04.469282 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"]
Jan 28 15:34:04 crc kubenswrapper[4893]: E0128 15:34:04.487305 4893 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode84c5ebf_c963_4acb_b64f_107efda9798d.slice\": RecentStats: unable to find data in memory cache]"
Jan 28 15:34:04 crc kubenswrapper[4893]: I0128 15:34:04.632587 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-676bj\" (UniqueName: \"kubernetes.io/projected/22539c1b-d8a1-4f7d-b202-b33f849a21b4-kube-api-access-676bj\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"22539c1b-d8a1-4f7d-b202-b33f849a21b4\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 28 15:34:04 crc kubenswrapper[4893]: I0128 15:34:04.632647 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22539c1b-d8a1-4f7d-b202-b33f849a21b4-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"22539c1b-d8a1-4f7d-b202-b33f849a21b4\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 28 15:34:04 crc kubenswrapper[4893]: I0128 15:34:04.734420 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-676bj\" (UniqueName: \"kubernetes.io/projected/22539c1b-d8a1-4f7d-b202-b33f849a21b4-kube-api-access-676bj\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"22539c1b-d8a1-4f7d-b202-b33f849a21b4\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 28 15:34:04 crc kubenswrapper[4893]: I0128 15:34:04.734514 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22539c1b-d8a1-4f7d-b202-b33f849a21b4-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"22539c1b-d8a1-4f7d-b202-b33f849a21b4\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 28 15:34:04 crc kubenswrapper[4893]: I0128 15:34:04.739923 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22539c1b-d8a1-4f7d-b202-b33f849a21b4-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"22539c1b-d8a1-4f7d-b202-b33f849a21b4\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 28 15:34:04 crc kubenswrapper[4893]: I0128 15:34:04.752821 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-676bj\" (UniqueName: \"kubernetes.io/projected/22539c1b-d8a1-4f7d-b202-b33f849a21b4-kube-api-access-676bj\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"22539c1b-d8a1-4f7d-b202-b33f849a21b4\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 28 15:34:04 crc kubenswrapper[4893]: I0128 15:34:04.793050 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 28 15:34:04 crc kubenswrapper[4893]: E0128 15:34:04.907801 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19a3c0768eaf9982ac92395a5658756fe6092b2a770ba144c8d5a60ecbfa1dc8" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 28 15:34:04 crc kubenswrapper[4893]: E0128 15:34:04.912484 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19a3c0768eaf9982ac92395a5658756fe6092b2a770ba144c8d5a60ecbfa1dc8" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 28 15:34:04 crc kubenswrapper[4893]: E0128 15:34:04.917810 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19a3c0768eaf9982ac92395a5658756fe6092b2a770ba144c8d5a60ecbfa1dc8" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 28 15:34:04 crc kubenswrapper[4893]: E0128 15:34:04.917934 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="0fdb187d-14cc-4e15-b604-c1f913305e00" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 28 15:34:05 crc kubenswrapper[4893]: I0128 15:34:05.220172 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"]
Jan 28 15:34:05 crc kubenswrapper[4893]: I0128 15:34:05.383757 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"22539c1b-d8a1-4f7d-b202-b33f849a21b4","Type":"ContainerStarted","Data":"88a7be1adbaa83e7e1fcc9b7ba338bd595cb4a494c96319025d687e9a237afe1"}
Jan 28 15:34:06 crc kubenswrapper[4893]: I0128 15:34:06.399962 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"22539c1b-d8a1-4f7d-b202-b33f849a21b4","Type":"ContainerStarted","Data":"d273cb16f4a3f4e324aca3e9033472f19a0368f3be36a05dfb2bd679aa55457e"}
Jan 28 15:34:06 crc kubenswrapper[4893]: I0128 15:34:06.400561 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 28 15:34:06 crc kubenswrapper[4893]: I0128 15:34:06.425794 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podStartSLOduration=2.425766884 podStartE2EDuration="2.425766884s" podCreationTimestamp="2026-01-28 15:34:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:34:06.4197447 +0000 UTC m=+1964.193359768" watchObservedRunningTime="2026-01-28 15:34:06.425766884 +0000 UTC m=+1964.199381912"
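The repeated ExecSync failures above come from the readiness probe of the fake compute container: kubelet asks CRI-O to run /usr/bin/pgrep -r DRST nova-compute inside the container, and while the container is stopping the runtime refuses to register new exec PIDs, so the probe errors instead of returning a verdict. Functionally the probe reduces to running pgrep and mapping exit status 0 to Ready, roughly as below (assuming procps-ng pgrep, whose -r/--runstates flag filters matches by process state D, R, S or T):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ready reduces the exec readiness probe to its essence: run the
    // command and treat exit status 0 as Ready. pgrep exits 0 when at
    // least one matching process exists.
    func ready() bool {
    	return exec.Command("/usr/bin/pgrep", "-r", "DRST", "nova-compute").Run() == nil
    }

    func main() {
    	fmt.Println("ready:", ready())
    }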
containerID="19a3c0768eaf9982ac92395a5658756fe6092b2a770ba144c8d5a60ecbfa1dc8" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 15:34:09 crc kubenswrapper[4893]: E0128 15:34:09.909632 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19a3c0768eaf9982ac92395a5658756fe6092b2a770ba144c8d5a60ecbfa1dc8" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 15:34:09 crc kubenswrapper[4893]: E0128 15:34:09.911714 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="19a3c0768eaf9982ac92395a5658756fe6092b2a770ba144c8d5a60ecbfa1dc8" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 28 15:34:09 crc kubenswrapper[4893]: E0128 15:34:09.911807 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="0fdb187d-14cc-4e15-b604-c1f913305e00" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 28 15:34:12 crc kubenswrapper[4893]: I0128 15:34:12.897103 4893 scope.go:117] "RemoveContainer" containerID="79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51" Jan 28 15:34:13 crc kubenswrapper[4893]: I0128 15:34:13.460424 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" event={"ID":"b2ddd967-f9a8-464a-95de-512c9c5874fd","Type":"ContainerStarted","Data":"d1e89d3b5214b1e2076651a4fdac0f9f4db53c16fe20d6f51f420c4a7e4e5bf5"} Jan 28 15:34:14 crc kubenswrapper[4893]: I0128 15:34:14.473739 4893 generic.go:334] "Generic (PLEG): container finished" podID="0fdb187d-14cc-4e15-b604-c1f913305e00" containerID="19a3c0768eaf9982ac92395a5658756fe6092b2a770ba144c8d5a60ecbfa1dc8" exitCode=137 Jan 28 15:34:14 crc kubenswrapper[4893]: I0128 15:34:14.473832 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"0fdb187d-14cc-4e15-b604-c1f913305e00","Type":"ContainerDied","Data":"19a3c0768eaf9982ac92395a5658756fe6092b2a770ba144c8d5a60ecbfa1dc8"} Jan 28 15:34:14 crc kubenswrapper[4893]: I0128 15:34:14.474276 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"0fdb187d-14cc-4e15-b604-c1f913305e00","Type":"ContainerDied","Data":"3bdd204cdda95954e4937cac9acb50102c710afa535de078c0d3c6539019954a"} Jan 28 15:34:14 crc kubenswrapper[4893]: I0128 15:34:14.474294 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bdd204cdda95954e4937cac9acb50102c710afa535de078c0d3c6539019954a" Jan 28 15:34:14 crc kubenswrapper[4893]: I0128 15:34:14.474311 4893 scope.go:117] "RemoveContainer" containerID="1c0fba30eaf353185dc124d17a6c7a39650234d5dd472d3efdb30b10bb6e1a85" Jan 28 15:34:14 crc kubenswrapper[4893]: I0128 15:34:14.565540 4893 util.go:48] "No ready sandbox for pod can be found. 
Jan 28 15:34:14 crc kubenswrapper[4893]: I0128 15:34:14.565540 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"
Jan 28 15:34:14 crc kubenswrapper[4893]: I0128 15:34:14.709215 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ptv2\" (UniqueName: \"kubernetes.io/projected/0fdb187d-14cc-4e15-b604-c1f913305e00-kube-api-access-6ptv2\") pod \"0fdb187d-14cc-4e15-b604-c1f913305e00\" (UID: \"0fdb187d-14cc-4e15-b604-c1f913305e00\") "
Jan 28 15:34:14 crc kubenswrapper[4893]: I0128 15:34:14.709319 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fdb187d-14cc-4e15-b604-c1f913305e00-config-data\") pod \"0fdb187d-14cc-4e15-b604-c1f913305e00\" (UID: \"0fdb187d-14cc-4e15-b604-c1f913305e00\") "
Jan 28 15:34:14 crc kubenswrapper[4893]: I0128 15:34:14.715648 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fdb187d-14cc-4e15-b604-c1f913305e00-kube-api-access-6ptv2" (OuterVolumeSpecName: "kube-api-access-6ptv2") pod "0fdb187d-14cc-4e15-b604-c1f913305e00" (UID: "0fdb187d-14cc-4e15-b604-c1f913305e00"). InnerVolumeSpecName "kube-api-access-6ptv2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:34:14 crc kubenswrapper[4893]: I0128 15:34:14.734362 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdb187d-14cc-4e15-b604-c1f913305e00-config-data" (OuterVolumeSpecName: "config-data") pod "0fdb187d-14cc-4e15-b604-c1f913305e00" (UID: "0fdb187d-14cc-4e15-b604-c1f913305e00"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:34:14 crc kubenswrapper[4893]: I0128 15:34:14.810719 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ptv2\" (UniqueName: \"kubernetes.io/projected/0fdb187d-14cc-4e15-b604-c1f913305e00-kube-api-access-6ptv2\") on node \"crc\" DevicePath \"\""
Jan 28 15:34:14 crc kubenswrapper[4893]: I0128 15:34:14.810761 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fdb187d-14cc-4e15-b604-c1f913305e00-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 15:34:14 crc kubenswrapper[4893]: I0128 15:34:14.826378 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.344042 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2bxvl"]
Jan 28 15:34:15 crc kubenswrapper[4893]: E0128 15:34:15.344958 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fdb187d-14cc-4e15-b604-c1f913305e00" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.345049 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fdb187d-14cc-4e15-b604-c1f913305e00" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 28 15:34:15 crc kubenswrapper[4893]: E0128 15:34:15.345130 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fdb187d-14cc-4e15-b604-c1f913305e00" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.345179 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fdb187d-14cc-4e15-b604-c1f913305e00" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.345422 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fdb187d-14cc-4e15-b604-c1f913305e00" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.345531 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fdb187d-14cc-4e15-b604-c1f913305e00" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.345593 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fdb187d-14cc-4e15-b604-c1f913305e00" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.346186 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2bxvl"
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.350229 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data"
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.351238 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts"
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.356573 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2bxvl"]
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.463516 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 28 15:34:15 crc kubenswrapper[4893]: E0128 15:34:15.463865 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fdb187d-14cc-4e15-b604-c1f913305e00" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.463883 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fdb187d-14cc-4e15-b604-c1f913305e00" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.464869 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.483010 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data"
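Before admitting the newly added pods, cpu_manager and memory_manager drop the per-container assignments still recorded for pods that no longer exist (the deleted db-sync and fake-compute pods), which is what the repeated RemoveStaleState / "Deleted CPUSet assignment" lines record. The cleanup amounts to pruning a map keyed by pod, roughly as below (illustrative; kubelet keys its real state by pod UID and container name):

    package main

    import "fmt"

    // removeStaleState drops per-container resource assignments whose pod
    // is no longer active -- the idea behind the RemoveStaleState and
    // "Deleted CPUSet assignment" entries above.
    func removeStaleState(assignments map[string][]int, activePods map[string]bool) {
    	for podUID := range assignments {
    		if !activePods[podUID] {
    			fmt.Println("RemoveStaleState: removing container state for pod", podUID)
    			delete(assignments, podUID)
    		}
    	}
    }

    func main() {
    	assignments := map[string][]int{
    		"0fdb187d-14cc-4e15-b604-c1f913305e00": {2, 3}, // deleted compute pod
    	}
    	removeStaleState(assignments, map[string]bool{})
    }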
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.485090 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.489921 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.519390 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lch7j\" (UniqueName: \"kubernetes.io/projected/79f84931-160b-409c-bb0b-193fd8988158-kube-api-access-lch7j\") pod \"nova-kuttl-cell0-cell-mapping-2bxvl\" (UID: \"79f84931-160b-409c-bb0b-193fd8988158\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2bxvl"
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.519432 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79f84931-160b-409c-bb0b-193fd8988158-scripts\") pod \"nova-kuttl-cell0-cell-mapping-2bxvl\" (UID: \"79f84931-160b-409c-bb0b-193fd8988158\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2bxvl"
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.519457 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79f84931-160b-409c-bb0b-193fd8988158-config-data\") pod \"nova-kuttl-cell0-cell-mapping-2bxvl\" (UID: \"79f84931-160b-409c-bb0b-193fd8988158\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2bxvl"
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.523405 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"]
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.545833 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"]
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.557751 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"]
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.559105 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.562354 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-novncproxy-config-data"
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.563692 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"]
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.620766 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pnjm\" (UniqueName: \"kubernetes.io/projected/fbd6cad2-10dc-443a-b1eb-1c537d618188-kube-api-access-2pnjm\") pod \"nova-kuttl-api-0\" (UID: \"fbd6cad2-10dc-443a-b1eb-1c537d618188\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.621882 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbd6cad2-10dc-443a-b1eb-1c537d618188-logs\") pod \"nova-kuttl-api-0\" (UID: \"fbd6cad2-10dc-443a-b1eb-1c537d618188\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.621955 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbd6cad2-10dc-443a-b1eb-1c537d618188-config-data\") pod \"nova-kuttl-api-0\" (UID: \"fbd6cad2-10dc-443a-b1eb-1c537d618188\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.622016 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lch7j\" (UniqueName: \"kubernetes.io/projected/79f84931-160b-409c-bb0b-193fd8988158-kube-api-access-lch7j\") pod \"nova-kuttl-cell0-cell-mapping-2bxvl\" (UID: \"79f84931-160b-409c-bb0b-193fd8988158\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2bxvl"
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.622050 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79f84931-160b-409c-bb0b-193fd8988158-scripts\") pod \"nova-kuttl-cell0-cell-mapping-2bxvl\" (UID: \"79f84931-160b-409c-bb0b-193fd8988158\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2bxvl"
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.622084 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79f84931-160b-409c-bb0b-193fd8988158-config-data\") pod \"nova-kuttl-cell0-cell-mapping-2bxvl\" (UID: \"79f84931-160b-409c-bb0b-193fd8988158\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2bxvl"
Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.628087 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79f84931-160b-409c-bb0b-193fd8988158-scripts\") pod \"nova-kuttl-cell0-cell-mapping-2bxvl\" (UID: \"79f84931-160b-409c-bb0b-193fd8988158\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2bxvl"
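The kube-api-access-* volumes being attached and mounted above are the projected service-account volumes every pod gets: the service-account token, the cluster CA bundle, and the namespace, combined into one read-only mount at a well-known path. From inside a pod they can be read directly (standard mount path; run outside a pod this simply prints errors):

    package main

    import (
    	"fmt"
    	"os"
    )

    // The kube-api-access-* projected volume appears inside the container
    // at this well-known path.
    const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

    func main() {
    	for _, name := range []string{"token", "ca.crt", "namespace"} {
    		data, err := os.ReadFile(saDir + "/" + name)
    		if err != nil {
    			fmt.Println(name, "not available:", err)
    			continue
    		}
    		fmt.Printf("%s: %d bytes\n", name, len(data))
    	}
    }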
" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2bxvl" Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.648405 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.651823 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.655577 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.670001 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lch7j\" (UniqueName: \"kubernetes.io/projected/79f84931-160b-409c-bb0b-193fd8988158-kube-api-access-lch7j\") pod \"nova-kuttl-cell0-cell-mapping-2bxvl\" (UID: \"79f84931-160b-409c-bb0b-193fd8988158\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2bxvl" Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.678327 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.729232 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4244\" (UniqueName: \"kubernetes.io/projected/d27a2427-c1e3-44ef-85f2-42fa4d95aa73-kube-api-access-w4244\") pod \"nova-kuttl-metadata-0\" (UID: \"d27a2427-c1e3-44ef-85f2-42fa4d95aa73\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.729595 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbd6cad2-10dc-443a-b1eb-1c537d618188-logs\") pod \"nova-kuttl-api-0\" (UID: \"fbd6cad2-10dc-443a-b1eb-1c537d618188\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.729652 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbd6cad2-10dc-443a-b1eb-1c537d618188-config-data\") pod \"nova-kuttl-api-0\" (UID: \"fbd6cad2-10dc-443a-b1eb-1c537d618188\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.729687 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfdfs\" (UniqueName: \"kubernetes.io/projected/90e30875-ed7b-4c7e-b8ed-3deb340cfd2b-kube-api-access-sfdfs\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"90e30875-ed7b-4c7e-b8ed-3deb340cfd2b\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.729712 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d27a2427-c1e3-44ef-85f2-42fa4d95aa73-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"d27a2427-c1e3-44ef-85f2-42fa4d95aa73\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.729749 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d27a2427-c1e3-44ef-85f2-42fa4d95aa73-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"d27a2427-c1e3-44ef-85f2-42fa4d95aa73\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:34:15 crc 
kubenswrapper[4893]: I0128 15:34:15.729779 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pnjm\" (UniqueName: \"kubernetes.io/projected/fbd6cad2-10dc-443a-b1eb-1c537d618188-kube-api-access-2pnjm\") pod \"nova-kuttl-api-0\" (UID: \"fbd6cad2-10dc-443a-b1eb-1c537d618188\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.729854 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90e30875-ed7b-4c7e-b8ed-3deb340cfd2b-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"90e30875-ed7b-4c7e-b8ed-3deb340cfd2b\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.730365 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbd6cad2-10dc-443a-b1eb-1c537d618188-logs\") pod \"nova-kuttl-api-0\" (UID: \"fbd6cad2-10dc-443a-b1eb-1c537d618188\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.752421 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.757125 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.760411 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbd6cad2-10dc-443a-b1eb-1c537d618188-config-data\") pod \"nova-kuttl-api-0\" (UID: \"fbd6cad2-10dc-443a-b1eb-1c537d618188\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.766681 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.782935 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pnjm\" (UniqueName: \"kubernetes.io/projected/fbd6cad2-10dc-443a-b1eb-1c537d618188-kube-api-access-2pnjm\") pod \"nova-kuttl-api-0\" (UID: \"fbd6cad2-10dc-443a-b1eb-1c537d618188\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.794232 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.832512 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4244\" (UniqueName: \"kubernetes.io/projected/d27a2427-c1e3-44ef-85f2-42fa4d95aa73-kube-api-access-w4244\") pod \"nova-kuttl-metadata-0\" (UID: \"d27a2427-c1e3-44ef-85f2-42fa4d95aa73\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.832640 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfdfs\" (UniqueName: \"kubernetes.io/projected/90e30875-ed7b-4c7e-b8ed-3deb340cfd2b-kube-api-access-sfdfs\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"90e30875-ed7b-4c7e-b8ed-3deb340cfd2b\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.832674 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d27a2427-c1e3-44ef-85f2-42fa4d95aa73-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"d27a2427-c1e3-44ef-85f2-42fa4d95aa73\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.832715 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d27a2427-c1e3-44ef-85f2-42fa4d95aa73-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"d27a2427-c1e3-44ef-85f2-42fa4d95aa73\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.832800 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90e30875-ed7b-4c7e-b8ed-3deb340cfd2b-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"90e30875-ed7b-4c7e-b8ed-3deb340cfd2b\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.841208 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d27a2427-c1e3-44ef-85f2-42fa4d95aa73-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"d27a2427-c1e3-44ef-85f2-42fa4d95aa73\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.841870 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90e30875-ed7b-4c7e-b8ed-3deb340cfd2b-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"90e30875-ed7b-4c7e-b8ed-3deb340cfd2b\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.845229 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d27a2427-c1e3-44ef-85f2-42fa4d95aa73-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"d27a2427-c1e3-44ef-85f2-42fa4d95aa73\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.858165 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfdfs\" (UniqueName: \"kubernetes.io/projected/90e30875-ed7b-4c7e-b8ed-3deb340cfd2b-kube-api-access-sfdfs\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"90e30875-ed7b-4c7e-b8ed-3deb340cfd2b\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.869066 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4244\" (UniqueName: \"kubernetes.io/projected/d27a2427-c1e3-44ef-85f2-42fa4d95aa73-kube-api-access-w4244\") pod \"nova-kuttl-metadata-0\" (UID: \"d27a2427-c1e3-44ef-85f2-42fa4d95aa73\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.882173 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.938720 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91e794c5-4ed4-4d5b-9698-a0b1fea08552-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"91e794c5-4ed4-4d5b-9698-a0b1fea08552\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.939060 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5kz8\" (UniqueName: \"kubernetes.io/projected/91e794c5-4ed4-4d5b-9698-a0b1fea08552-kube-api-access-q5kz8\") pod \"nova-kuttl-scheduler-0\" (UID: \"91e794c5-4ed4-4d5b-9698-a0b1fea08552\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:34:15 crc kubenswrapper[4893]: I0128 15:34:15.963461 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2bxvl" Jan 28 15:34:16 crc kubenswrapper[4893]: I0128 15:34:16.040875 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5kz8\" (UniqueName: \"kubernetes.io/projected/91e794c5-4ed4-4d5b-9698-a0b1fea08552-kube-api-access-q5kz8\") pod \"nova-kuttl-scheduler-0\" (UID: \"91e794c5-4ed4-4d5b-9698-a0b1fea08552\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:34:16 crc kubenswrapper[4893]: I0128 15:34:16.041099 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91e794c5-4ed4-4d5b-9698-a0b1fea08552-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"91e794c5-4ed4-4d5b-9698-a0b1fea08552\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:34:16 crc kubenswrapper[4893]: I0128 15:34:16.059383 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91e794c5-4ed4-4d5b-9698-a0b1fea08552-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"91e794c5-4ed4-4d5b-9698-a0b1fea08552\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:34:16 crc kubenswrapper[4893]: I0128 15:34:16.082714 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:34:16 crc kubenswrapper[4893]: I0128 15:34:16.083820 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:34:16 crc kubenswrapper[4893]: I0128 15:34:16.088317 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5kz8\" (UniqueName: \"kubernetes.io/projected/91e794c5-4ed4-4d5b-9698-a0b1fea08552-kube-api-access-q5kz8\") pod \"nova-kuttl-scheduler-0\" (UID: \"91e794c5-4ed4-4d5b-9698-a0b1fea08552\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:34:16 crc kubenswrapper[4893]: I0128 15:34:16.130106 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:34:16 crc kubenswrapper[4893]: W0128 15:34:16.465594 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90e30875_ed7b_4c7e_b8ed_3deb340cfd2b.slice/crio-3eec7704395942526b591378338ea08b6f6bfd47be10ab6da37b2b71a9c150cb WatchSource:0}: Error finding container 3eec7704395942526b591378338ea08b6f6bfd47be10ab6da37b2b71a9c150cb: Status 404 returned error can't find the container with id 3eec7704395942526b591378338ea08b6f6bfd47be10ab6da37b2b71a9c150cb Jan 28 15:34:16 crc kubenswrapper[4893]: I0128 15:34:16.466738 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 28 15:34:16 crc kubenswrapper[4893]: I0128 15:34:16.495527 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"90e30875-ed7b-4c7e-b8ed-3deb340cfd2b","Type":"ContainerStarted","Data":"3eec7704395942526b591378338ea08b6f6bfd47be10ab6da37b2b71a9c150cb"} Jan 28 15:34:16 crc kubenswrapper[4893]: I0128 15:34:16.696378 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:34:16 crc kubenswrapper[4893]: I0128 15:34:16.711188 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2bxvl"] Jan 28 15:34:16 crc kubenswrapper[4893]: W0128 15:34:16.735936 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79f84931_160b_409c_bb0b_193fd8988158.slice/crio-6f38ddea16f6a575ff6b02ee452a356a2b9c4a9969cecb951b08522f186610aa WatchSource:0}: Error finding container 6f38ddea16f6a575ff6b02ee452a356a2b9c4a9969cecb951b08522f186610aa: Status 404 returned error can't find the container with id 6f38ddea16f6a575ff6b02ee452a356a2b9c4a9969cecb951b08522f186610aa Jan 28 15:34:16 crc kubenswrapper[4893]: I0128 15:34:16.742379 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:34:16 crc kubenswrapper[4893]: W0128 15:34:16.762911 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbd6cad2_10dc_443a_b1eb_1c537d618188.slice/crio-cc8844220f8ba20e9365bfd1cff60fa24532e770880bcd9dc83c7cc26e4f0872 WatchSource:0}: Error finding container cc8844220f8ba20e9365bfd1cff60fa24532e770880bcd9dc83c7cc26e4f0872: Status 404 returned error can't find the container with id cc8844220f8ba20e9365bfd1cff60fa24532e770880bcd9dc83c7cc26e4f0872 Jan 28 15:34:16 crc kubenswrapper[4893]: I0128 15:34:16.764099 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-9kfvk"] Jan 28 15:34:16 crc kubenswrapper[4893]: I0128 15:34:16.765284 4893 util.go:30] "No sandbox for pod can be found. 
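The manager.go:1169 warnings above are a benign race between cAdvisor's cgroup watch and container creation/teardown: the watch event fires before the runtime has registered the container (or after it is gone), the lookup 404s, and the event is dropped; the next relist picks the container up. The handling pattern, sketched (illustrative only, not cAdvisor's code):

    package main

    import (
    	"errors"
    	"fmt"
    )

    var errNotFound = errors.New("can't find the container")

    // handleWatchEvent drops not-found lookups as transient instead of
    // treating them as fatal, mirroring the warnings in the log above.
    func handleWatchEvent(lookup func(id string) error, id string) error {
    	if err := lookup(id); err != nil {
    		if errors.Is(err, errNotFound) {
    			fmt.Println("Failed to process watch event:", err)
    			return nil // transient; a later relist will observe the container
    		}
    		return err
    	}
    	return nil
    }

    func main() {
    	_ = handleWatchEvent(func(string) error { return errNotFound }, "example-container-id")
    }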
Jan 28 15:34:16 crc kubenswrapper[4893]: I0128 15:34:16.765284 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-9kfvk"
Jan 28 15:34:16 crc kubenswrapper[4893]: I0128 15:34:16.768985 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data"
Jan 28 15:34:16 crc kubenswrapper[4893]: I0128 15:34:16.769174 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-scripts"
Jan 28 15:34:16 crc kubenswrapper[4893]: I0128 15:34:16.773314 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-9kfvk"]
Jan 28 15:34:16 crc kubenswrapper[4893]: I0128 15:34:16.860156 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpcjb\" (UniqueName: \"kubernetes.io/projected/5788fc83-55a9-489b-b094-e6a36fe58124-kube-api-access-wpcjb\") pod \"nova-kuttl-cell1-conductor-db-sync-9kfvk\" (UID: \"5788fc83-55a9-489b-b094-e6a36fe58124\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-9kfvk"
Jan 28 15:34:16 crc kubenswrapper[4893]: I0128 15:34:16.860536 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5788fc83-55a9-489b-b094-e6a36fe58124-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-9kfvk\" (UID: \"5788fc83-55a9-489b-b094-e6a36fe58124\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-9kfvk"
Jan 28 15:34:16 crc kubenswrapper[4893]: I0128 15:34:16.860614 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5788fc83-55a9-489b-b094-e6a36fe58124-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-9kfvk\" (UID: \"5788fc83-55a9-489b-b094-e6a36fe58124\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-9kfvk"
Jan 28 15:34:16 crc kubenswrapper[4893]: I0128 15:34:16.906377 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fdb187d-14cc-4e15-b604-c1f913305e00" path="/var/lib/kubelet/pods/0fdb187d-14cc-4e15-b604-c1f913305e00/volumes"
Jan 28 15:34:16 crc kubenswrapper[4893]: I0128 15:34:16.961822 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5788fc83-55a9-489b-b094-e6a36fe58124-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-9kfvk\" (UID: \"5788fc83-55a9-489b-b094-e6a36fe58124\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-9kfvk"
Jan 28 15:34:16 crc kubenswrapper[4893]: I0128 15:34:16.961976 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5788fc83-55a9-489b-b094-e6a36fe58124-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-9kfvk\" (UID: \"5788fc83-55a9-489b-b094-e6a36fe58124\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-9kfvk"
Jan 28 15:34:16 crc kubenswrapper[4893]: I0128 15:34:16.962035 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpcjb\" (UniqueName: \"kubernetes.io/projected/5788fc83-55a9-489b-b094-e6a36fe58124-kube-api-access-wpcjb\") pod \"nova-kuttl-cell1-conductor-db-sync-9kfvk\" (UID: \"5788fc83-55a9-489b-b094-e6a36fe58124\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-9kfvk"
Jan 28 15:34:16 crc kubenswrapper[4893]: W0128 15:34:16.963974 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd27a2427_c1e3_44ef_85f2_42fa4d95aa73.slice/crio-330fe987b4b4884bf63e024341f79089e469b87deb40bf32e1f7bf72009be37c WatchSource:0}: Error finding container 330fe987b4b4884bf63e024341f79089e469b87deb40bf32e1f7bf72009be37c: Status 404 returned error can't find the container with id 330fe987b4b4884bf63e024341f79089e469b87deb40bf32e1f7bf72009be37c
Jan 28 15:34:16 crc kubenswrapper[4893]: I0128 15:34:16.966707 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 28 15:34:16 crc kubenswrapper[4893]: I0128 15:34:16.967339 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5788fc83-55a9-489b-b094-e6a36fe58124-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-9kfvk\" (UID: \"5788fc83-55a9-489b-b094-e6a36fe58124\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-9kfvk"
Jan 28 15:34:16 crc kubenswrapper[4893]: I0128 15:34:16.967365 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5788fc83-55a9-489b-b094-e6a36fe58124-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-9kfvk\" (UID: \"5788fc83-55a9-489b-b094-e6a36fe58124\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-9kfvk"
Jan 28 15:34:16 crc kubenswrapper[4893]: I0128 15:34:16.983723 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpcjb\" (UniqueName: \"kubernetes.io/projected/5788fc83-55a9-489b-b094-e6a36fe58124-kube-api-access-wpcjb\") pod \"nova-kuttl-cell1-conductor-db-sync-9kfvk\" (UID: \"5788fc83-55a9-489b-b094-e6a36fe58124\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-9kfvk"
Jan 28 15:34:17 crc kubenswrapper[4893]: I0128 15:34:17.090835 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-9kfvk"
Jan 28 15:34:17 crc kubenswrapper[4893]: I0128 15:34:17.506384 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2bxvl" event={"ID":"79f84931-160b-409c-bb0b-193fd8988158","Type":"ContainerStarted","Data":"91ee63ce9b5c9bd1a06218d6a8da96c13442369128814ee61c62d7597ef4bd42"}
Jan 28 15:34:17 crc kubenswrapper[4893]: I0128 15:34:17.506726 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2bxvl" event={"ID":"79f84931-160b-409c-bb0b-193fd8988158","Type":"ContainerStarted","Data":"6f38ddea16f6a575ff6b02ee452a356a2b9c4a9969cecb951b08522f186610aa"}
Jan 28 15:34:17 crc kubenswrapper[4893]: I0128 15:34:17.508796 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"d27a2427-c1e3-44ef-85f2-42fa4d95aa73","Type":"ContainerStarted","Data":"45f8c140cc06d3e8ba2e4491f4b5eb77d4776569939a099ff80243d1bac86738"}
Jan 28 15:34:17 crc kubenswrapper[4893]: I0128 15:34:17.508825 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"d27a2427-c1e3-44ef-85f2-42fa4d95aa73","Type":"ContainerStarted","Data":"67d4ea67b412cbc248b8b3221ad52f71093a61b253fd8308134e0e8284de9588"}
Jan 28 15:34:17 crc kubenswrapper[4893]: I0128 15:34:17.508836 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"d27a2427-c1e3-44ef-85f2-42fa4d95aa73","Type":"ContainerStarted","Data":"330fe987b4b4884bf63e024341f79089e469b87deb40bf32e1f7bf72009be37c"}
Jan 28 15:34:17 crc kubenswrapper[4893]: I0128 15:34:17.511268 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"91e794c5-4ed4-4d5b-9698-a0b1fea08552","Type":"ContainerStarted","Data":"d8ef1f2a674f89985432d61e520009c530917236e755060868a20e3aaac72496"}
Jan 28 15:34:17 crc kubenswrapper[4893]: I0128 15:34:17.511298 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"91e794c5-4ed4-4d5b-9698-a0b1fea08552","Type":"ContainerStarted","Data":"f5421ddc45c93ff961e7c5b69a79ec5fc5ef81918a877cd570ec41a7f57fa1fe"}
Jan 28 15:34:17 crc kubenswrapper[4893]: I0128 15:34:17.514460 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"fbd6cad2-10dc-443a-b1eb-1c537d618188","Type":"ContainerStarted","Data":"df2d9ffa2b19e393be0906a8dc2f6ad7934dfebad271a2b3af36919a34d67118"}
Jan 28 15:34:17 crc kubenswrapper[4893]: I0128 15:34:17.514510 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"fbd6cad2-10dc-443a-b1eb-1c537d618188","Type":"ContainerStarted","Data":"eaab9986ccc76aa74f7b4c96f80e04762122fae3005c7102dad90b78f38ad1bf"}
Jan 28 15:34:17 crc kubenswrapper[4893]: I0128 15:34:17.514525 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"fbd6cad2-10dc-443a-b1eb-1c537d618188","Type":"ContainerStarted","Data":"cc8844220f8ba20e9365bfd1cff60fa24532e770880bcd9dc83c7cc26e4f0872"}
Jan 28 15:34:17 crc kubenswrapper[4893]: I0128 15:34:17.516766 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"90e30875-ed7b-4c7e-b8ed-3deb340cfd2b","Type":"ContainerStarted","Data":"6da8ee762251547dd83cc919cb572b56c889ae8170c9ef564db9e5014d4ef858"}
Jan 28 15:34:17 crc kubenswrapper[4893]: I0128 15:34:17.523973 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2bxvl" podStartSLOduration=2.523905029 podStartE2EDuration="2.523905029s" podCreationTimestamp="2026-01-28 15:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:34:17.521796801 +0000 UTC m=+1975.295411829" watchObservedRunningTime="2026-01-28 15:34:17.523905029 +0000 UTC m=+1975.297520057"
Jan 28 15:34:17 crc kubenswrapper[4893]: I0128 15:34:17.554991 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.554964823 podStartE2EDuration="2.554964823s" podCreationTimestamp="2026-01-28 15:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:34:17.542217327 +0000 UTC m=+1975.315832355" watchObservedRunningTime="2026-01-28 15:34:17.554964823 +0000 UTC m=+1975.328579851"
Jan 28 15:34:17 crc kubenswrapper[4893]: I0128 15:34:17.566686 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.566667931 podStartE2EDuration="2.566667931s" podCreationTimestamp="2026-01-28 15:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:34:17.558972192 +0000 UTC m=+1975.332587230" watchObservedRunningTime="2026-01-28 15:34:17.566667931 +0000 UTC m=+1975.340282949"
Jan 28 15:34:17 crc kubenswrapper[4893]: I0128 15:34:17.578651 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podStartSLOduration=2.578635396 podStartE2EDuration="2.578635396s" podCreationTimestamp="2026-01-28 15:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:34:17.575747828 +0000 UTC m=+1975.349362866" watchObservedRunningTime="2026-01-28 15:34:17.578635396 +0000 UTC m=+1975.352250424"
Jan 28 15:34:17 crc kubenswrapper[4893]: I0128 15:34:17.593026 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.593012527 podStartE2EDuration="2.593012527s" podCreationTimestamp="2026-01-28 15:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:34:17.591604778 +0000 UTC m=+1975.365219826" watchObservedRunningTime="2026-01-28 15:34:17.593012527 +0000 UTC m=+1975.366627555"
Jan 28 15:34:17 crc kubenswrapper[4893]: I0128 15:34:17.634560 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-9kfvk"]
Jan 28 15:34:17 crc kubenswrapper[4893]: W0128 15:34:17.638631 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5788fc83_55a9_489b_b094_e6a36fe58124.slice/crio-3e30b4b0538b31ade73c44c9a8722d7ab32998baaa8d9a77846189755529aa56 WatchSource:0}: Error finding container 3e30b4b0538b31ade73c44c9a8722d7ab32998baaa8d9a77846189755529aa56: Status 404 returned error can't find the container with id 3e30b4b0538b31ade73c44c9a8722d7ab32998baaa8d9a77846189755529aa56
Jan 28 15:34:18 crc kubenswrapper[4893]: I0128 15:34:18.527675 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-9kfvk" event={"ID":"5788fc83-55a9-489b-b094-e6a36fe58124","Type":"ContainerStarted","Data":"80ac0038f893c1b3f591dd9edac2e4896a0c1167cae42df06d7833373f00ec78"}
Jan 28 15:34:18 crc kubenswrapper[4893]: I0128 15:34:18.528247 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-9kfvk" event={"ID":"5788fc83-55a9-489b-b094-e6a36fe58124","Type":"ContainerStarted","Data":"3e30b4b0538b31ade73c44c9a8722d7ab32998baaa8d9a77846189755529aa56"}
Jan 28 15:34:18 crc kubenswrapper[4893]: I0128 15:34:18.547580 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-9kfvk" podStartSLOduration=2.547559459 podStartE2EDuration="2.547559459s" podCreationTimestamp="2026-01-28 15:34:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:34:18.542965295 +0000 UTC m=+1976.316580333" watchObservedRunningTime="2026-01-28 15:34:18.547559459 +0000 UTC m=+1976.321174487"
Jan 28 15:34:20 crc kubenswrapper[4893]: I0128 15:34:20.882424 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 28 15:34:21 crc kubenswrapper[4893]: I0128 15:34:21.086513 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:34:21 crc kubenswrapper[4893]: I0128 15:34:21.087643 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:34:21 crc kubenswrapper[4893]: I0128 15:34:21.131854 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 28 15:34:21 crc kubenswrapper[4893]: I0128 15:34:21.554517 4893 generic.go:334] "Generic (PLEG): container finished" podID="5788fc83-55a9-489b-b094-e6a36fe58124" containerID="80ac0038f893c1b3f591dd9edac2e4896a0c1167cae42df06d7833373f00ec78" exitCode=0
Jan 28 15:34:21 crc kubenswrapper[4893]: I0128 15:34:21.554614 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-9kfvk" event={"ID":"5788fc83-55a9-489b-b094-e6a36fe58124","Type":"ContainerDied","Data":"80ac0038f893c1b3f591dd9edac2e4896a0c1167cae42df06d7833373f00ec78"}
Jan 28 15:34:22 crc kubenswrapper[4893]: I0128 15:34:22.908654 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-9kfvk"
Jan 28 15:34:23 crc kubenswrapper[4893]: I0128 15:34:23.064198 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5788fc83-55a9-489b-b094-e6a36fe58124-config-data\") pod \"5788fc83-55a9-489b-b094-e6a36fe58124\" (UID: \"5788fc83-55a9-489b-b094-e6a36fe58124\") "
Jan 28 15:34:23 crc kubenswrapper[4893]: I0128 15:34:23.064357 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5788fc83-55a9-489b-b094-e6a36fe58124-scripts\") pod \"5788fc83-55a9-489b-b094-e6a36fe58124\" (UID: \"5788fc83-55a9-489b-b094-e6a36fe58124\") "
Jan 28 15:34:23 crc kubenswrapper[4893]: I0128 15:34:23.064441 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpcjb\" (UniqueName: \"kubernetes.io/projected/5788fc83-55a9-489b-b094-e6a36fe58124-kube-api-access-wpcjb\") pod \"5788fc83-55a9-489b-b094-e6a36fe58124\" (UID: \"5788fc83-55a9-489b-b094-e6a36fe58124\") "
Jan 28 15:34:23 crc kubenswrapper[4893]: I0128 15:34:23.069634 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5788fc83-55a9-489b-b094-e6a36fe58124-kube-api-access-wpcjb" (OuterVolumeSpecName: "kube-api-access-wpcjb") pod "5788fc83-55a9-489b-b094-e6a36fe58124" (UID: "5788fc83-55a9-489b-b094-e6a36fe58124"). InnerVolumeSpecName "kube-api-access-wpcjb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:34:23 crc kubenswrapper[4893]: I0128 15:34:23.070163 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5788fc83-55a9-489b-b094-e6a36fe58124-scripts" (OuterVolumeSpecName: "scripts") pod "5788fc83-55a9-489b-b094-e6a36fe58124" (UID: "5788fc83-55a9-489b-b094-e6a36fe58124"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:34:23 crc kubenswrapper[4893]: I0128 15:34:23.088164 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5788fc83-55a9-489b-b094-e6a36fe58124-config-data" (OuterVolumeSpecName: "config-data") pod "5788fc83-55a9-489b-b094-e6a36fe58124" (UID: "5788fc83-55a9-489b-b094-e6a36fe58124"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:34:23 crc kubenswrapper[4893]: I0128 15:34:23.169642 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpcjb\" (UniqueName: \"kubernetes.io/projected/5788fc83-55a9-489b-b094-e6a36fe58124-kube-api-access-wpcjb\") on node \"crc\" DevicePath \"\""
Jan 28 15:34:23 crc kubenswrapper[4893]: I0128 15:34:23.169679 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5788fc83-55a9-489b-b094-e6a36fe58124-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 15:34:23 crc kubenswrapper[4893]: I0128 15:34:23.169691 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5788fc83-55a9-489b-b094-e6a36fe58124-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 15:34:23 crc kubenswrapper[4893]: I0128 15:34:23.570603 4893 generic.go:334] "Generic (PLEG): container finished" podID="79f84931-160b-409c-bb0b-193fd8988158" containerID="91ee63ce9b5c9bd1a06218d6a8da96c13442369128814ee61c62d7597ef4bd42" exitCode=0
Jan 28 15:34:23 crc kubenswrapper[4893]: I0128 15:34:23.570704 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2bxvl" event={"ID":"79f84931-160b-409c-bb0b-193fd8988158","Type":"ContainerDied","Data":"91ee63ce9b5c9bd1a06218d6a8da96c13442369128814ee61c62d7597ef4bd42"}
Jan 28 15:34:23 crc kubenswrapper[4893]: I0128 15:34:23.572164 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-9kfvk" event={"ID":"5788fc83-55a9-489b-b094-e6a36fe58124","Type":"ContainerDied","Data":"3e30b4b0538b31ade73c44c9a8722d7ab32998baaa8d9a77846189755529aa56"}
Jan 28 15:34:23 crc kubenswrapper[4893]: I0128 15:34:23.572191 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e30b4b0538b31ade73c44c9a8722d7ab32998baaa8d9a77846189755529aa56"
Jan 28 15:34:23 crc kubenswrapper[4893]: I0128 15:34:23.572241 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-9kfvk"
Jan 28 15:34:23 crc kubenswrapper[4893]: I0128 15:34:23.652379 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"]
Jan 28 15:34:23 crc kubenswrapper[4893]: E0128 15:34:23.652750 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5788fc83-55a9-489b-b094-e6a36fe58124" containerName="nova-kuttl-cell1-conductor-db-sync"
Jan 28 15:34:23 crc kubenswrapper[4893]: I0128 15:34:23.652768 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="5788fc83-55a9-489b-b094-e6a36fe58124" containerName="nova-kuttl-cell1-conductor-db-sync"
Jan 28 15:34:23 crc kubenswrapper[4893]: I0128 15:34:23.652943 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="5788fc83-55a9-489b-b094-e6a36fe58124" containerName="nova-kuttl-cell1-conductor-db-sync"
Jan 28 15:34:23 crc kubenswrapper[4893]: I0128 15:34:23.653462 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 28 15:34:23 crc kubenswrapper[4893]: I0128 15:34:23.663278 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"]
Jan 28 15:34:23 crc kubenswrapper[4893]: I0128 15:34:23.663703 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data"
Jan 28 15:34:23 crc kubenswrapper[4893]: I0128 15:34:23.778669 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cptxz\" (UniqueName: \"kubernetes.io/projected/f05773d2-58b3-4e11-9962-45502872c375-kube-api-access-cptxz\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"f05773d2-58b3-4e11-9962-45502872c375\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 28 15:34:23 crc kubenswrapper[4893]: I0128 15:34:23.778717 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f05773d2-58b3-4e11-9962-45502872c375-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"f05773d2-58b3-4e11-9962-45502872c375\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 28 15:34:23 crc kubenswrapper[4893]: I0128 15:34:23.879950 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f05773d2-58b3-4e11-9962-45502872c375-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"f05773d2-58b3-4e11-9962-45502872c375\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 28 15:34:23 crc kubenswrapper[4893]: I0128 15:34:23.880344 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cptxz\" (UniqueName: \"kubernetes.io/projected/f05773d2-58b3-4e11-9962-45502872c375-kube-api-access-cptxz\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"f05773d2-58b3-4e11-9962-45502872c375\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 28 15:34:23 crc kubenswrapper[4893]: I0128 15:34:23.887094 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f05773d2-58b3-4e11-9962-45502872c375-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"f05773d2-58b3-4e11-9962-45502872c375\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 28 15:34:23 crc kubenswrapper[4893]: I0128 15:34:23.896302 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cptxz\" (UniqueName: \"kubernetes.io/projected/f05773d2-58b3-4e11-9962-45502872c375-kube-api-access-cptxz\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"f05773d2-58b3-4e11-9962-45502872c375\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 28 15:34:23 crc kubenswrapper[4893]: I0128 15:34:23.974679 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 28 15:34:24 crc kubenswrapper[4893]: I0128 15:34:24.590197 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"]
Jan 28 15:34:24 crc kubenswrapper[4893]: I0128 15:34:24.858022 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2bxvl" Jan 28 15:34:25 crc kubenswrapper[4893]: I0128 15:34:25.000653 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79f84931-160b-409c-bb0b-193fd8988158-config-data\") pod \"79f84931-160b-409c-bb0b-193fd8988158\" (UID: \"79f84931-160b-409c-bb0b-193fd8988158\") " Jan 28 15:34:25 crc kubenswrapper[4893]: I0128 15:34:25.000824 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lch7j\" (UniqueName: \"kubernetes.io/projected/79f84931-160b-409c-bb0b-193fd8988158-kube-api-access-lch7j\") pod \"79f84931-160b-409c-bb0b-193fd8988158\" (UID: \"79f84931-160b-409c-bb0b-193fd8988158\") " Jan 28 15:34:25 crc kubenswrapper[4893]: I0128 15:34:25.000980 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79f84931-160b-409c-bb0b-193fd8988158-scripts\") pod \"79f84931-160b-409c-bb0b-193fd8988158\" (UID: \"79f84931-160b-409c-bb0b-193fd8988158\") " Jan 28 15:34:25 crc kubenswrapper[4893]: I0128 15:34:25.005803 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79f84931-160b-409c-bb0b-193fd8988158-kube-api-access-lch7j" (OuterVolumeSpecName: "kube-api-access-lch7j") pod "79f84931-160b-409c-bb0b-193fd8988158" (UID: "79f84931-160b-409c-bb0b-193fd8988158"). InnerVolumeSpecName "kube-api-access-lch7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:34:25 crc kubenswrapper[4893]: I0128 15:34:25.023803 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79f84931-160b-409c-bb0b-193fd8988158-scripts" (OuterVolumeSpecName: "scripts") pod "79f84931-160b-409c-bb0b-193fd8988158" (UID: "79f84931-160b-409c-bb0b-193fd8988158"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:34:25 crc kubenswrapper[4893]: I0128 15:34:25.024203 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79f84931-160b-409c-bb0b-193fd8988158-config-data" (OuterVolumeSpecName: "config-data") pod "79f84931-160b-409c-bb0b-193fd8988158" (UID: "79f84931-160b-409c-bb0b-193fd8988158"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:34:25 crc kubenswrapper[4893]: I0128 15:34:25.103286 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lch7j\" (UniqueName: \"kubernetes.io/projected/79f84931-160b-409c-bb0b-193fd8988158-kube-api-access-lch7j\") on node \"crc\" DevicePath \"\"" Jan 28 15:34:25 crc kubenswrapper[4893]: I0128 15:34:25.103330 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79f84931-160b-409c-bb0b-193fd8988158-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:34:25 crc kubenswrapper[4893]: I0128 15:34:25.103342 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79f84931-160b-409c-bb0b-193fd8988158-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:34:25 crc kubenswrapper[4893]: I0128 15:34:25.596257 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"f05773d2-58b3-4e11-9962-45502872c375","Type":"ContainerStarted","Data":"427eef1853e9898113b1d6697292f7fae98f41bca69fd9d349a636b579dad904"} Jan 28 15:34:25 crc kubenswrapper[4893]: I0128 15:34:25.596336 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"f05773d2-58b3-4e11-9962-45502872c375","Type":"ContainerStarted","Data":"1daa58f5c31dd332679f8357f06e315bebdbb2b761aa729fcda77aa16d38c5f2"} Jan 28 15:34:25 crc kubenswrapper[4893]: I0128 15:34:25.599100 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 28 15:34:25 crc kubenswrapper[4893]: I0128 15:34:25.600929 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2bxvl" Jan 28 15:34:25 crc kubenswrapper[4893]: I0128 15:34:25.601030 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2bxvl" event={"ID":"79f84931-160b-409c-bb0b-193fd8988158","Type":"ContainerDied","Data":"6f38ddea16f6a575ff6b02ee452a356a2b9c4a9969cecb951b08522f186610aa"} Jan 28 15:34:25 crc kubenswrapper[4893]: I0128 15:34:25.626250 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f38ddea16f6a575ff6b02ee452a356a2b9c4a9969cecb951b08522f186610aa" Jan 28 15:34:25 crc kubenswrapper[4893]: I0128 15:34:25.635410 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podStartSLOduration=2.635269429 podStartE2EDuration="2.635269429s" podCreationTimestamp="2026-01-28 15:34:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:34:25.613647101 +0000 UTC m=+1983.387262129" watchObservedRunningTime="2026-01-28 15:34:25.635269429 +0000 UTC m=+1983.408884457" Jan 28 15:34:25 crc kubenswrapper[4893]: I0128 15:34:25.779727 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:34:25 crc kubenswrapper[4893]: I0128 15:34:25.780302 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="fbd6cad2-10dc-443a-b1eb-1c537d618188" containerName="nova-kuttl-api-log" containerID="cri-o://eaab9986ccc76aa74f7b4c96f80e04762122fae3005c7102dad90b78f38ad1bf" gracePeriod=30 Jan 28 15:34:25 crc kubenswrapper[4893]: I0128 15:34:25.780441 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="fbd6cad2-10dc-443a-b1eb-1c537d618188" containerName="nova-kuttl-api-api" containerID="cri-o://df2d9ffa2b19e393be0906a8dc2f6ad7934dfebad271a2b3af36919a34d67118" gracePeriod=30 Jan 28 15:34:25 crc kubenswrapper[4893]: I0128 15:34:25.805052 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:34:25 crc kubenswrapper[4893]: I0128 15:34:25.805353 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="91e794c5-4ed4-4d5b-9698-a0b1fea08552" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://d8ef1f2a674f89985432d61e520009c530917236e755060868a20e3aaac72496" gracePeriod=30 Jan 28 15:34:25 crc kubenswrapper[4893]: I0128 15:34:25.882581 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:34:25 crc kubenswrapper[4893]: I0128 15:34:25.912324 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:34:25 crc kubenswrapper[4893]: I0128 15:34:25.935999 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:34:25 crc kubenswrapper[4893]: I0128 15:34:25.936578 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="d27a2427-c1e3-44ef-85f2-42fa4d95aa73" containerName="nova-kuttl-metadata-log" 
containerID="cri-o://67d4ea67b412cbc248b8b3221ad52f71093a61b253fd8308134e0e8284de9588" gracePeriod=30 Jan 28 15:34:25 crc kubenswrapper[4893]: I0128 15:34:25.936651 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="d27a2427-c1e3-44ef-85f2-42fa4d95aa73" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://45f8c140cc06d3e8ba2e4491f4b5eb77d4776569939a099ff80243d1bac86738" gracePeriod=30 Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.515713 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.627384 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d27a2427-c1e3-44ef-85f2-42fa4d95aa73-logs\") pod \"d27a2427-c1e3-44ef-85f2-42fa4d95aa73\" (UID: \"d27a2427-c1e3-44ef-85f2-42fa4d95aa73\") " Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.627509 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4244\" (UniqueName: \"kubernetes.io/projected/d27a2427-c1e3-44ef-85f2-42fa4d95aa73-kube-api-access-w4244\") pod \"d27a2427-c1e3-44ef-85f2-42fa4d95aa73\" (UID: \"d27a2427-c1e3-44ef-85f2-42fa4d95aa73\") " Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.627641 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d27a2427-c1e3-44ef-85f2-42fa4d95aa73-config-data\") pod \"d27a2427-c1e3-44ef-85f2-42fa4d95aa73\" (UID: \"d27a2427-c1e3-44ef-85f2-42fa4d95aa73\") " Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.627913 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d27a2427-c1e3-44ef-85f2-42fa4d95aa73-logs" (OuterVolumeSpecName: "logs") pod "d27a2427-c1e3-44ef-85f2-42fa4d95aa73" (UID: "d27a2427-c1e3-44ef-85f2-42fa4d95aa73"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.627997 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d27a2427-c1e3-44ef-85f2-42fa4d95aa73-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.633559 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d27a2427-c1e3-44ef-85f2-42fa4d95aa73-kube-api-access-w4244" (OuterVolumeSpecName: "kube-api-access-w4244") pod "d27a2427-c1e3-44ef-85f2-42fa4d95aa73" (UID: "d27a2427-c1e3-44ef-85f2-42fa4d95aa73"). InnerVolumeSpecName "kube-api-access-w4244". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.637743 4893 generic.go:334] "Generic (PLEG): container finished" podID="d27a2427-c1e3-44ef-85f2-42fa4d95aa73" containerID="45f8c140cc06d3e8ba2e4491f4b5eb77d4776569939a099ff80243d1bac86738" exitCode=0 Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.637771 4893 generic.go:334] "Generic (PLEG): container finished" podID="d27a2427-c1e3-44ef-85f2-42fa4d95aa73" containerID="67d4ea67b412cbc248b8b3221ad52f71093a61b253fd8308134e0e8284de9588" exitCode=143 Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.637806 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"d27a2427-c1e3-44ef-85f2-42fa4d95aa73","Type":"ContainerDied","Data":"45f8c140cc06d3e8ba2e4491f4b5eb77d4776569939a099ff80243d1bac86738"} Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.637832 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"d27a2427-c1e3-44ef-85f2-42fa4d95aa73","Type":"ContainerDied","Data":"67d4ea67b412cbc248b8b3221ad52f71093a61b253fd8308134e0e8284de9588"} Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.637841 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"d27a2427-c1e3-44ef-85f2-42fa4d95aa73","Type":"ContainerDied","Data":"330fe987b4b4884bf63e024341f79089e469b87deb40bf32e1f7bf72009be37c"} Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.637856 4893 scope.go:117] "RemoveContainer" containerID="45f8c140cc06d3e8ba2e4491f4b5eb77d4776569939a099ff80243d1bac86738" Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.637969 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.641487 4893 generic.go:334] "Generic (PLEG): container finished" podID="fbd6cad2-10dc-443a-b1eb-1c537d618188" containerID="df2d9ffa2b19e393be0906a8dc2f6ad7934dfebad271a2b3af36919a34d67118" exitCode=0 Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.641521 4893 generic.go:334] "Generic (PLEG): container finished" podID="fbd6cad2-10dc-443a-b1eb-1c537d618188" containerID="eaab9986ccc76aa74f7b4c96f80e04762122fae3005c7102dad90b78f38ad1bf" exitCode=143 Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.642084 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"fbd6cad2-10dc-443a-b1eb-1c537d618188","Type":"ContainerDied","Data":"df2d9ffa2b19e393be0906a8dc2f6ad7934dfebad271a2b3af36919a34d67118"} Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.642175 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"fbd6cad2-10dc-443a-b1eb-1c537d618188","Type":"ContainerDied","Data":"eaab9986ccc76aa74f7b4c96f80e04762122fae3005c7102dad90b78f38ad1bf"} Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.653195 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.655587 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d27a2427-c1e3-44ef-85f2-42fa4d95aa73-config-data" (OuterVolumeSpecName: "config-data") pod "d27a2427-c1e3-44ef-85f2-42fa4d95aa73" (UID: "d27a2427-c1e3-44ef-85f2-42fa4d95aa73"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.665012 4893 scope.go:117] "RemoveContainer" containerID="67d4ea67b412cbc248b8b3221ad52f71093a61b253fd8308134e0e8284de9588" Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.686401 4893 scope.go:117] "RemoveContainer" containerID="45f8c140cc06d3e8ba2e4491f4b5eb77d4776569939a099ff80243d1bac86738" Jan 28 15:34:26 crc kubenswrapper[4893]: E0128 15:34:26.686793 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45f8c140cc06d3e8ba2e4491f4b5eb77d4776569939a099ff80243d1bac86738\": container with ID starting with 45f8c140cc06d3e8ba2e4491f4b5eb77d4776569939a099ff80243d1bac86738 not found: ID does not exist" containerID="45f8c140cc06d3e8ba2e4491f4b5eb77d4776569939a099ff80243d1bac86738" Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.686830 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45f8c140cc06d3e8ba2e4491f4b5eb77d4776569939a099ff80243d1bac86738"} err="failed to get container status \"45f8c140cc06d3e8ba2e4491f4b5eb77d4776569939a099ff80243d1bac86738\": rpc error: code = NotFound desc = could not find container \"45f8c140cc06d3e8ba2e4491f4b5eb77d4776569939a099ff80243d1bac86738\": container with ID starting with 45f8c140cc06d3e8ba2e4491f4b5eb77d4776569939a099ff80243d1bac86738 not found: ID does not exist" Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.686852 4893 scope.go:117] "RemoveContainer" containerID="67d4ea67b412cbc248b8b3221ad52f71093a61b253fd8308134e0e8284de9588" Jan 28 15:34:26 crc kubenswrapper[4893]: E0128 15:34:26.687171 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67d4ea67b412cbc248b8b3221ad52f71093a61b253fd8308134e0e8284de9588\": container with ID starting with 67d4ea67b412cbc248b8b3221ad52f71093a61b253fd8308134e0e8284de9588 not found: ID does not exist" containerID="67d4ea67b412cbc248b8b3221ad52f71093a61b253fd8308134e0e8284de9588" Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.687194 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67d4ea67b412cbc248b8b3221ad52f71093a61b253fd8308134e0e8284de9588"} err="failed to get container status \"67d4ea67b412cbc248b8b3221ad52f71093a61b253fd8308134e0e8284de9588\": rpc error: code = NotFound desc = could not find container \"67d4ea67b412cbc248b8b3221ad52f71093a61b253fd8308134e0e8284de9588\": container with ID starting with 67d4ea67b412cbc248b8b3221ad52f71093a61b253fd8308134e0e8284de9588 not found: ID does not exist" Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.687244 4893 scope.go:117] "RemoveContainer" containerID="45f8c140cc06d3e8ba2e4491f4b5eb77d4776569939a099ff80243d1bac86738" Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.687616 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45f8c140cc06d3e8ba2e4491f4b5eb77d4776569939a099ff80243d1bac86738"} err="failed to get container status \"45f8c140cc06d3e8ba2e4491f4b5eb77d4776569939a099ff80243d1bac86738\": rpc error: code = NotFound desc = could not find container \"45f8c140cc06d3e8ba2e4491f4b5eb77d4776569939a099ff80243d1bac86738\": container with ID starting with 45f8c140cc06d3e8ba2e4491f4b5eb77d4776569939a099ff80243d1bac86738 not found: ID does not exist" Jan 28 15:34:26 crc kubenswrapper[4893]: 
I0128 15:34:26.687634 4893 scope.go:117] "RemoveContainer" containerID="67d4ea67b412cbc248b8b3221ad52f71093a61b253fd8308134e0e8284de9588" Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.687922 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67d4ea67b412cbc248b8b3221ad52f71093a61b253fd8308134e0e8284de9588"} err="failed to get container status \"67d4ea67b412cbc248b8b3221ad52f71093a61b253fd8308134e0e8284de9588\": rpc error: code = NotFound desc = could not find container \"67d4ea67b412cbc248b8b3221ad52f71093a61b253fd8308134e0e8284de9588\": container with ID starting with 67d4ea67b412cbc248b8b3221ad52f71093a61b253fd8308134e0e8284de9588 not found: ID does not exist" Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.729009 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4244\" (UniqueName: \"kubernetes.io/projected/d27a2427-c1e3-44ef-85f2-42fa4d95aa73-kube-api-access-w4244\") on node \"crc\" DevicePath \"\"" Jan 28 15:34:26 crc kubenswrapper[4893]: I0128 15:34:26.729038 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d27a2427-c1e3-44ef-85f2-42fa4d95aa73-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.009112 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.027553 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.038705 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.046753 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:34:27 crc kubenswrapper[4893]: E0128 15:34:27.047243 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbd6cad2-10dc-443a-b1eb-1c537d618188" containerName="nova-kuttl-api-api" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.047266 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbd6cad2-10dc-443a-b1eb-1c537d618188" containerName="nova-kuttl-api-api" Jan 28 15:34:27 crc kubenswrapper[4893]: E0128 15:34:27.047295 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79f84931-160b-409c-bb0b-193fd8988158" containerName="nova-manage" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.047304 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="79f84931-160b-409c-bb0b-193fd8988158" containerName="nova-manage" Jan 28 15:34:27 crc kubenswrapper[4893]: E0128 15:34:27.047315 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbd6cad2-10dc-443a-b1eb-1c537d618188" containerName="nova-kuttl-api-log" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.047323 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbd6cad2-10dc-443a-b1eb-1c537d618188" containerName="nova-kuttl-api-log" Jan 28 15:34:27 crc kubenswrapper[4893]: E0128 15:34:27.047340 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d27a2427-c1e3-44ef-85f2-42fa4d95aa73" containerName="nova-kuttl-metadata-log" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.047348 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="d27a2427-c1e3-44ef-85f2-42fa4d95aa73" 
containerName="nova-kuttl-metadata-log" Jan 28 15:34:27 crc kubenswrapper[4893]: E0128 15:34:27.047361 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d27a2427-c1e3-44ef-85f2-42fa4d95aa73" containerName="nova-kuttl-metadata-metadata" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.047368 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="d27a2427-c1e3-44ef-85f2-42fa4d95aa73" containerName="nova-kuttl-metadata-metadata" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.047561 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="d27a2427-c1e3-44ef-85f2-42fa4d95aa73" containerName="nova-kuttl-metadata-metadata" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.047575 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbd6cad2-10dc-443a-b1eb-1c537d618188" containerName="nova-kuttl-api-log" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.047590 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="79f84931-160b-409c-bb0b-193fd8988158" containerName="nova-manage" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.047605 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="d27a2427-c1e3-44ef-85f2-42fa4d95aa73" containerName="nova-kuttl-metadata-log" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.047615 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbd6cad2-10dc-443a-b1eb-1c537d618188" containerName="nova-kuttl-api-api" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.048709 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.052751 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.064313 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.133959 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pnjm\" (UniqueName: \"kubernetes.io/projected/fbd6cad2-10dc-443a-b1eb-1c537d618188-kube-api-access-2pnjm\") pod \"fbd6cad2-10dc-443a-b1eb-1c537d618188\" (UID: \"fbd6cad2-10dc-443a-b1eb-1c537d618188\") " Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.134278 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbd6cad2-10dc-443a-b1eb-1c537d618188-logs\") pod \"fbd6cad2-10dc-443a-b1eb-1c537d618188\" (UID: \"fbd6cad2-10dc-443a-b1eb-1c537d618188\") " Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.134380 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbd6cad2-10dc-443a-b1eb-1c537d618188-config-data\") pod \"fbd6cad2-10dc-443a-b1eb-1c537d618188\" (UID: \"fbd6cad2-10dc-443a-b1eb-1c537d618188\") " Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.134849 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97f23fe0-8b86-4d77-b796-5016503b4ffa-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"97f23fe0-8b86-4d77-b796-5016503b4ffa\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.134981 4893 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbd6cad2-10dc-443a-b1eb-1c537d618188-logs" (OuterVolumeSpecName: "logs") pod "fbd6cad2-10dc-443a-b1eb-1c537d618188" (UID: "fbd6cad2-10dc-443a-b1eb-1c537d618188"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.134998 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8lhh\" (UniqueName: \"kubernetes.io/projected/97f23fe0-8b86-4d77-b796-5016503b4ffa-kube-api-access-n8lhh\") pod \"nova-kuttl-metadata-0\" (UID: \"97f23fe0-8b86-4d77-b796-5016503b4ffa\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.135222 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97f23fe0-8b86-4d77-b796-5016503b4ffa-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"97f23fe0-8b86-4d77-b796-5016503b4ffa\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.135377 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbd6cad2-10dc-443a-b1eb-1c537d618188-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.137645 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbd6cad2-10dc-443a-b1eb-1c537d618188-kube-api-access-2pnjm" (OuterVolumeSpecName: "kube-api-access-2pnjm") pod "fbd6cad2-10dc-443a-b1eb-1c537d618188" (UID: "fbd6cad2-10dc-443a-b1eb-1c537d618188"). InnerVolumeSpecName "kube-api-access-2pnjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.159238 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbd6cad2-10dc-443a-b1eb-1c537d618188-config-data" (OuterVolumeSpecName: "config-data") pod "fbd6cad2-10dc-443a-b1eb-1c537d618188" (UID: "fbd6cad2-10dc-443a-b1eb-1c537d618188"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.237257 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8lhh\" (UniqueName: \"kubernetes.io/projected/97f23fe0-8b86-4d77-b796-5016503b4ffa-kube-api-access-n8lhh\") pod \"nova-kuttl-metadata-0\" (UID: \"97f23fe0-8b86-4d77-b796-5016503b4ffa\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.237367 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97f23fe0-8b86-4d77-b796-5016503b4ffa-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"97f23fe0-8b86-4d77-b796-5016503b4ffa\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.237436 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97f23fe0-8b86-4d77-b796-5016503b4ffa-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"97f23fe0-8b86-4d77-b796-5016503b4ffa\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.237511 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pnjm\" (UniqueName: \"kubernetes.io/projected/fbd6cad2-10dc-443a-b1eb-1c537d618188-kube-api-access-2pnjm\") on node \"crc\" DevicePath \"\"" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.237524 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbd6cad2-10dc-443a-b1eb-1c537d618188-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.238414 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97f23fe0-8b86-4d77-b796-5016503b4ffa-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"97f23fe0-8b86-4d77-b796-5016503b4ffa\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.241156 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97f23fe0-8b86-4d77-b796-5016503b4ffa-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"97f23fe0-8b86-4d77-b796-5016503b4ffa\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.256030 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8lhh\" (UniqueName: \"kubernetes.io/projected/97f23fe0-8b86-4d77-b796-5016503b4ffa-kube-api-access-n8lhh\") pod \"nova-kuttl-metadata-0\" (UID: \"97f23fe0-8b86-4d77-b796-5016503b4ffa\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.385384 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.653163 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"fbd6cad2-10dc-443a-b1eb-1c537d618188","Type":"ContainerDied","Data":"cc8844220f8ba20e9365bfd1cff60fa24532e770880bcd9dc83c7cc26e4f0872"} Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.653527 4893 scope.go:117] "RemoveContainer" containerID="df2d9ffa2b19e393be0906a8dc2f6ad7934dfebad271a2b3af36919a34d67118" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.653205 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.683785 4893 scope.go:117] "RemoveContainer" containerID="eaab9986ccc76aa74f7b4c96f80e04762122fae3005c7102dad90b78f38ad1bf" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.699003 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.719101 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.730859 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.732837 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.736262 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.740040 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.847246 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb4d340d-b990-48c6-a31b-ab334c760096-logs\") pod \"nova-kuttl-api-0\" (UID: \"eb4d340d-b990-48c6-a31b-ab334c760096\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.847330 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm8mx\" (UniqueName: \"kubernetes.io/projected/eb4d340d-b990-48c6-a31b-ab334c760096-kube-api-access-cm8mx\") pod \"nova-kuttl-api-0\" (UID: \"eb4d340d-b990-48c6-a31b-ab334c760096\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.847359 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb4d340d-b990-48c6-a31b-ab334c760096-config-data\") pod \"nova-kuttl-api-0\" (UID: \"eb4d340d-b990-48c6-a31b-ab334c760096\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.883234 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.949648 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb4d340d-b990-48c6-a31b-ab334c760096-logs\") pod \"nova-kuttl-api-0\" (UID: 
\"eb4d340d-b990-48c6-a31b-ab334c760096\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.949843 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm8mx\" (UniqueName: \"kubernetes.io/projected/eb4d340d-b990-48c6-a31b-ab334c760096-kube-api-access-cm8mx\") pod \"nova-kuttl-api-0\" (UID: \"eb4d340d-b990-48c6-a31b-ab334c760096\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.949870 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb4d340d-b990-48c6-a31b-ab334c760096-config-data\") pod \"nova-kuttl-api-0\" (UID: \"eb4d340d-b990-48c6-a31b-ab334c760096\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.950385 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb4d340d-b990-48c6-a31b-ab334c760096-logs\") pod \"nova-kuttl-api-0\" (UID: \"eb4d340d-b990-48c6-a31b-ab334c760096\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.955897 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb4d340d-b990-48c6-a31b-ab334c760096-config-data\") pod \"nova-kuttl-api-0\" (UID: \"eb4d340d-b990-48c6-a31b-ab334c760096\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:34:27 crc kubenswrapper[4893]: I0128 15:34:27.966111 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm8mx\" (UniqueName: \"kubernetes.io/projected/eb4d340d-b990-48c6-a31b-ab334c760096-kube-api-access-cm8mx\") pod \"nova-kuttl-api-0\" (UID: \"eb4d340d-b990-48c6-a31b-ab334c760096\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:34:28 crc kubenswrapper[4893]: I0128 15:34:28.048252 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 28 15:34:28 crc kubenswrapper[4893]: I0128 15:34:28.485221 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 28 15:34:28 crc kubenswrapper[4893]: W0128 15:34:28.499525 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb4d340d_b990_48c6_a31b_ab334c760096.slice/crio-b391240ff36360b99fbcc11333071f73be1a875ee699df365698a0d674cce167 WatchSource:0}: Error finding container b391240ff36360b99fbcc11333071f73be1a875ee699df365698a0d674cce167: Status 404 returned error can't find the container with id b391240ff36360b99fbcc11333071f73be1a875ee699df365698a0d674cce167 Jan 28 15:34:28 crc kubenswrapper[4893]: I0128 15:34:28.666100 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"eb4d340d-b990-48c6-a31b-ab334c760096","Type":"ContainerStarted","Data":"b391240ff36360b99fbcc11333071f73be1a875ee699df365698a0d674cce167"} Jan 28 15:34:28 crc kubenswrapper[4893]: I0128 15:34:28.668750 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"97f23fe0-8b86-4d77-b796-5016503b4ffa","Type":"ContainerStarted","Data":"4bada11841fc4e03476ff622b988adbc99dda9709edc84756e0d0e4378d32c94"} Jan 28 15:34:28 crc kubenswrapper[4893]: I0128 15:34:28.668942 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"97f23fe0-8b86-4d77-b796-5016503b4ffa","Type":"ContainerStarted","Data":"27545413a8c6b79cabbb2e91c07917d6e9f4da198a70f68721ab43b93f0faabf"} Jan 28 15:34:28 crc kubenswrapper[4893]: I0128 15:34:28.669021 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"97f23fe0-8b86-4d77-b796-5016503b4ffa","Type":"ContainerStarted","Data":"409c83c63c0986a3c6d1a667bd243833332eb1408c8782b09032abc10a408991"} Jan 28 15:34:28 crc kubenswrapper[4893]: I0128 15:34:28.698050 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=1.698034379 podStartE2EDuration="1.698034379s" podCreationTimestamp="2026-01-28 15:34:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:34:28.693993768 +0000 UTC m=+1986.467608796" watchObservedRunningTime="2026-01-28 15:34:28.698034379 +0000 UTC m=+1986.471649407" Jan 28 15:34:28 crc kubenswrapper[4893]: I0128 15:34:28.901643 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d27a2427-c1e3-44ef-85f2-42fa4d95aa73" path="/var/lib/kubelet/pods/d27a2427-c1e3-44ef-85f2-42fa4d95aa73/volumes" Jan 28 15:34:28 crc kubenswrapper[4893]: I0128 15:34:28.902354 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbd6cad2-10dc-443a-b1eb-1c537d618188" path="/var/lib/kubelet/pods/fbd6cad2-10dc-443a-b1eb-1c537d618188/volumes" Jan 28 15:34:29 crc kubenswrapper[4893]: I0128 15:34:29.699926 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"91e794c5-4ed4-4d5b-9698-a0b1fea08552","Type":"ContainerDied","Data":"d8ef1f2a674f89985432d61e520009c530917236e755060868a20e3aaac72496"} Jan 28 15:34:29 crc kubenswrapper[4893]: I0128 15:34:29.699928 4893 generic.go:334] "Generic (PLEG): container finished" 
podID="91e794c5-4ed4-4d5b-9698-a0b1fea08552" containerID="d8ef1f2a674f89985432d61e520009c530917236e755060868a20e3aaac72496" exitCode=0 Jan 28 15:34:29 crc kubenswrapper[4893]: I0128 15:34:29.702825 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"eb4d340d-b990-48c6-a31b-ab334c760096","Type":"ContainerStarted","Data":"9d71a878bf75bbda6490f13dcbd8c9a03f78afb33f487f0ba70abe43612ce7bf"} Jan 28 15:34:29 crc kubenswrapper[4893]: I0128 15:34:29.702862 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"eb4d340d-b990-48c6-a31b-ab334c760096","Type":"ContainerStarted","Data":"5f178a4da6157c6faa2ddbf8686b74431a4c49fb98e56e8b070bb9f5384b0838"} Jan 28 15:34:29 crc kubenswrapper[4893]: I0128 15:34:29.724206 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.724187027 podStartE2EDuration="2.724187027s" podCreationTimestamp="2026-01-28 15:34:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:34:29.721265008 +0000 UTC m=+1987.494880046" watchObservedRunningTime="2026-01-28 15:34:29.724187027 +0000 UTC m=+1987.497802055" Jan 28 15:34:30 crc kubenswrapper[4893]: I0128 15:34:30.111485 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:34:30 crc kubenswrapper[4893]: I0128 15:34:30.209601 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91e794c5-4ed4-4d5b-9698-a0b1fea08552-config-data\") pod \"91e794c5-4ed4-4d5b-9698-a0b1fea08552\" (UID: \"91e794c5-4ed4-4d5b-9698-a0b1fea08552\") " Jan 28 15:34:30 crc kubenswrapper[4893]: I0128 15:34:30.210464 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5kz8\" (UniqueName: \"kubernetes.io/projected/91e794c5-4ed4-4d5b-9698-a0b1fea08552-kube-api-access-q5kz8\") pod \"91e794c5-4ed4-4d5b-9698-a0b1fea08552\" (UID: \"91e794c5-4ed4-4d5b-9698-a0b1fea08552\") " Jan 28 15:34:30 crc kubenswrapper[4893]: I0128 15:34:30.215101 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91e794c5-4ed4-4d5b-9698-a0b1fea08552-kube-api-access-q5kz8" (OuterVolumeSpecName: "kube-api-access-q5kz8") pod "91e794c5-4ed4-4d5b-9698-a0b1fea08552" (UID: "91e794c5-4ed4-4d5b-9698-a0b1fea08552"). InnerVolumeSpecName "kube-api-access-q5kz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:34:30 crc kubenswrapper[4893]: I0128 15:34:30.239013 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91e794c5-4ed4-4d5b-9698-a0b1fea08552-config-data" (OuterVolumeSpecName: "config-data") pod "91e794c5-4ed4-4d5b-9698-a0b1fea08552" (UID: "91e794c5-4ed4-4d5b-9698-a0b1fea08552"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:34:30 crc kubenswrapper[4893]: I0128 15:34:30.312161 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5kz8\" (UniqueName: \"kubernetes.io/projected/91e794c5-4ed4-4d5b-9698-a0b1fea08552-kube-api-access-q5kz8\") on node \"crc\" DevicePath \"\"" Jan 28 15:34:30 crc kubenswrapper[4893]: I0128 15:34:30.312206 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91e794c5-4ed4-4d5b-9698-a0b1fea08552-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:34:30 crc kubenswrapper[4893]: I0128 15:34:30.713154 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"91e794c5-4ed4-4d5b-9698-a0b1fea08552","Type":"ContainerDied","Data":"f5421ddc45c93ff961e7c5b69a79ec5fc5ef81918a877cd570ec41a7f57fa1fe"} Jan 28 15:34:30 crc kubenswrapper[4893]: I0128 15:34:30.713215 4893 scope.go:117] "RemoveContainer" containerID="d8ef1f2a674f89985432d61e520009c530917236e755060868a20e3aaac72496" Jan 28 15:34:30 crc kubenswrapper[4893]: I0128 15:34:30.713220 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:34:30 crc kubenswrapper[4893]: I0128 15:34:30.748237 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:34:30 crc kubenswrapper[4893]: I0128 15:34:30.756382 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:34:30 crc kubenswrapper[4893]: I0128 15:34:30.767310 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:34:30 crc kubenswrapper[4893]: E0128 15:34:30.767690 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91e794c5-4ed4-4d5b-9698-a0b1fea08552" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:34:30 crc kubenswrapper[4893]: I0128 15:34:30.767709 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e794c5-4ed4-4d5b-9698-a0b1fea08552" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:34:30 crc kubenswrapper[4893]: I0128 15:34:30.767887 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="91e794c5-4ed4-4d5b-9698-a0b1fea08552" containerName="nova-kuttl-scheduler-scheduler" Jan 28 15:34:30 crc kubenswrapper[4893]: I0128 15:34:30.768449 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:34:30 crc kubenswrapper[4893]: I0128 15:34:30.771139 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 28 15:34:30 crc kubenswrapper[4893]: I0128 15:34:30.782162 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 28 15:34:30 crc kubenswrapper[4893]: I0128 15:34:30.901336 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91e794c5-4ed4-4d5b-9698-a0b1fea08552" path="/var/lib/kubelet/pods/91e794c5-4ed4-4d5b-9698-a0b1fea08552/volumes" Jan 28 15:34:30 crc kubenswrapper[4893]: I0128 15:34:30.923797 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6tph\" (UniqueName: \"kubernetes.io/projected/ac4cba31-7ca0-4de5-9eef-f1ea959c1823-kube-api-access-g6tph\") pod \"nova-kuttl-scheduler-0\" (UID: \"ac4cba31-7ca0-4de5-9eef-f1ea959c1823\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:34:30 crc kubenswrapper[4893]: I0128 15:34:30.923889 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac4cba31-7ca0-4de5-9eef-f1ea959c1823-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"ac4cba31-7ca0-4de5-9eef-f1ea959c1823\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:34:31 crc kubenswrapper[4893]: I0128 15:34:31.025519 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6tph\" (UniqueName: \"kubernetes.io/projected/ac4cba31-7ca0-4de5-9eef-f1ea959c1823-kube-api-access-g6tph\") pod \"nova-kuttl-scheduler-0\" (UID: \"ac4cba31-7ca0-4de5-9eef-f1ea959c1823\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:34:31 crc kubenswrapper[4893]: I0128 15:34:31.025640 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac4cba31-7ca0-4de5-9eef-f1ea959c1823-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"ac4cba31-7ca0-4de5-9eef-f1ea959c1823\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:34:31 crc kubenswrapper[4893]: I0128 15:34:31.029004 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac4cba31-7ca0-4de5-9eef-f1ea959c1823-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"ac4cba31-7ca0-4de5-9eef-f1ea959c1823\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:34:31 crc kubenswrapper[4893]: I0128 15:34:31.046022 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6tph\" (UniqueName: \"kubernetes.io/projected/ac4cba31-7ca0-4de5-9eef-f1ea959c1823-kube-api-access-g6tph\") pod \"nova-kuttl-scheduler-0\" (UID: \"ac4cba31-7ca0-4de5-9eef-f1ea959c1823\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 28 15:34:31 crc kubenswrapper[4893]: I0128 15:34:31.095252 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 28 15:34:31 crc kubenswrapper[4893]: I0128 15:34:31.548902 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 28 15:34:31 crc kubenswrapper[4893]: I0128 15:34:31.734146 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"ac4cba31-7ca0-4de5-9eef-f1ea959c1823","Type":"ContainerStarted","Data":"fdc5cd2f9e6a497e9a57da61076632656ee1cd6cf2eb99ef70861b313d898f30"}
Jan 28 15:34:32 crc kubenswrapper[4893]: I0128 15:34:32.385894 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:34:32 crc kubenswrapper[4893]: I0128 15:34:32.386462 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:34:32 crc kubenswrapper[4893]: I0128 15:34:32.743573 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"ac4cba31-7ca0-4de5-9eef-f1ea959c1823","Type":"ContainerStarted","Data":"16ffb4ea84423727ea795c398a5f24d6346472e66ab82a4999838bbc1cdf6b08"}
Jan 28 15:34:34 crc kubenswrapper[4893]: I0128 15:34:34.003066 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 28 15:34:34 crc kubenswrapper[4893]: I0128 15:34:34.026924 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=4.025087546 podStartE2EDuration="4.025087546s" podCreationTimestamp="2026-01-28 15:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:34:32.766377807 +0000 UTC m=+1990.539992835" watchObservedRunningTime="2026-01-28 15:34:34.025087546 +0000 UTC m=+1991.798702574"
Jan 28 15:34:34 crc kubenswrapper[4893]: I0128 15:34:34.451065 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-nbnnn"]
Jan 28 15:34:34 crc kubenswrapper[4893]: I0128 15:34:34.452137 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-nbnnn"
Jan 28 15:34:34 crc kubenswrapper[4893]: I0128 15:34:34.454048 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-config-data"
Jan 28 15:34:34 crc kubenswrapper[4893]: I0128 15:34:34.454133 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-scripts"
Jan 28 15:34:34 crc kubenswrapper[4893]: I0128 15:34:34.467081 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-nbnnn"]
Jan 28 15:34:34 crc kubenswrapper[4893]: I0128 15:34:34.591978 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f42pf\" (UniqueName: \"kubernetes.io/projected/2b554e78-6b57-406d-8a05-0e2931db92b7-kube-api-access-f42pf\") pod \"nova-kuttl-cell1-cell-mapping-nbnnn\" (UID: \"2b554e78-6b57-406d-8a05-0e2931db92b7\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-nbnnn"
Jan 28 15:34:34 crc kubenswrapper[4893]: I0128 15:34:34.592087 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b554e78-6b57-406d-8a05-0e2931db92b7-scripts\") pod \"nova-kuttl-cell1-cell-mapping-nbnnn\" (UID: \"2b554e78-6b57-406d-8a05-0e2931db92b7\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-nbnnn"
Jan 28 15:34:34 crc kubenswrapper[4893]: I0128 15:34:34.592130 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b554e78-6b57-406d-8a05-0e2931db92b7-config-data\") pod \"nova-kuttl-cell1-cell-mapping-nbnnn\" (UID: \"2b554e78-6b57-406d-8a05-0e2931db92b7\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-nbnnn"
Jan 28 15:34:34 crc kubenswrapper[4893]: I0128 15:34:34.693564 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f42pf\" (UniqueName: \"kubernetes.io/projected/2b554e78-6b57-406d-8a05-0e2931db92b7-kube-api-access-f42pf\") pod \"nova-kuttl-cell1-cell-mapping-nbnnn\" (UID: \"2b554e78-6b57-406d-8a05-0e2931db92b7\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-nbnnn"
Jan 28 15:34:34 crc kubenswrapper[4893]: I0128 15:34:34.693646 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b554e78-6b57-406d-8a05-0e2931db92b7-scripts\") pod \"nova-kuttl-cell1-cell-mapping-nbnnn\" (UID: \"2b554e78-6b57-406d-8a05-0e2931db92b7\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-nbnnn"
Jan 28 15:34:34 crc kubenswrapper[4893]: I0128 15:34:34.693694 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b554e78-6b57-406d-8a05-0e2931db92b7-config-data\") pod \"nova-kuttl-cell1-cell-mapping-nbnnn\" (UID: \"2b554e78-6b57-406d-8a05-0e2931db92b7\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-nbnnn"
Jan 28 15:34:34 crc kubenswrapper[4893]: I0128 15:34:34.700189 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b554e78-6b57-406d-8a05-0e2931db92b7-scripts\") pod \"nova-kuttl-cell1-cell-mapping-nbnnn\" (UID: \"2b554e78-6b57-406d-8a05-0e2931db92b7\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-nbnnn"
Jan 28 15:34:34 crc kubenswrapper[4893]: I0128 15:34:34.701573 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b554e78-6b57-406d-8a05-0e2931db92b7-config-data\") pod \"nova-kuttl-cell1-cell-mapping-nbnnn\" (UID: \"2b554e78-6b57-406d-8a05-0e2931db92b7\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-nbnnn"
Jan 28 15:34:34 crc kubenswrapper[4893]: I0128 15:34:34.709452 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f42pf\" (UniqueName: \"kubernetes.io/projected/2b554e78-6b57-406d-8a05-0e2931db92b7-kube-api-access-f42pf\") pod \"nova-kuttl-cell1-cell-mapping-nbnnn\" (UID: \"2b554e78-6b57-406d-8a05-0e2931db92b7\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-nbnnn"
Jan 28 15:34:34 crc kubenswrapper[4893]: I0128 15:34:34.770614 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-nbnnn"
Jan 28 15:34:35 crc kubenswrapper[4893]: I0128 15:34:35.334112 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-nbnnn"]
Jan 28 15:34:35 crc kubenswrapper[4893]: W0128 15:34:35.337992 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b554e78_6b57_406d_8a05_0e2931db92b7.slice/crio-f959b02114b32d7e1886adc7b6f1ecb05729aaf13439170a5acda469cdf1edf4 WatchSource:0}: Error finding container f959b02114b32d7e1886adc7b6f1ecb05729aaf13439170a5acda469cdf1edf4: Status 404 returned error can't find the container with id f959b02114b32d7e1886adc7b6f1ecb05729aaf13439170a5acda469cdf1edf4
Jan 28 15:34:35 crc kubenswrapper[4893]: I0128 15:34:35.793084 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-nbnnn" event={"ID":"2b554e78-6b57-406d-8a05-0e2931db92b7","Type":"ContainerStarted","Data":"8f7c133abbe6fdcc809602427be1caa77a4ff32912b2aec60602b480e91b2f76"}
Jan 28 15:34:35 crc kubenswrapper[4893]: I0128 15:34:35.793133 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-nbnnn" event={"ID":"2b554e78-6b57-406d-8a05-0e2931db92b7","Type":"ContainerStarted","Data":"f959b02114b32d7e1886adc7b6f1ecb05729aaf13439170a5acda469cdf1edf4"}
Jan 28 15:34:35 crc kubenswrapper[4893]: I0128 15:34:35.818298 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-nbnnn" podStartSLOduration=1.818275262 podStartE2EDuration="1.818275262s" podCreationTimestamp="2026-01-28 15:34:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:34:35.809463822 +0000 UTC m=+1993.583078860" watchObservedRunningTime="2026-01-28 15:34:35.818275262 +0000 UTC m=+1993.591890290"
Jan 28 15:34:36 crc kubenswrapper[4893]: I0128 15:34:36.096443 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 28 15:34:37 crc kubenswrapper[4893]: I0128 15:34:37.385826 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:34:37 crc kubenswrapper[4893]: I0128 15:34:37.386118 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:34:38 crc kubenswrapper[4893]: I0128 15:34:38.048855 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:34:38 crc kubenswrapper[4893]: I0128 15:34:38.049123 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:34:38 crc kubenswrapper[4893]: I0128 15:34:38.468705 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="97f23fe0-8b86-4d77-b796-5016503b4ffa" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.226:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 15:34:38 crc kubenswrapper[4893]: I0128 15:34:38.469380 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="97f23fe0-8b86-4d77-b796-5016503b4ffa" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.226:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 15:34:39 crc kubenswrapper[4893]: I0128 15:34:39.130724 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="eb4d340d-b990-48c6-a31b-ab334c760096" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.227:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 15:34:39 crc kubenswrapper[4893]: I0128 15:34:39.130747 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="eb4d340d-b990-48c6-a31b-ab334c760096" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.227:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 15:34:41 crc kubenswrapper[4893]: I0128 15:34:41.095782 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 28 15:34:41 crc kubenswrapper[4893]: I0128 15:34:41.122649 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 28 15:34:41 crc kubenswrapper[4893]: I0128 15:34:41.845907 4893 generic.go:334] "Generic (PLEG): container finished" podID="2b554e78-6b57-406d-8a05-0e2931db92b7" containerID="8f7c133abbe6fdcc809602427be1caa77a4ff32912b2aec60602b480e91b2f76" exitCode=0
Jan 28 15:34:41 crc kubenswrapper[4893]: I0128 15:34:41.845982 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-nbnnn" event={"ID":"2b554e78-6b57-406d-8a05-0e2931db92b7","Type":"ContainerDied","Data":"8f7c133abbe6fdcc809602427be1caa77a4ff32912b2aec60602b480e91b2f76"}
Jan 28 15:34:41 crc kubenswrapper[4893]: I0128 15:34:41.903274 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 28 15:34:43 crc kubenswrapper[4893]: I0128 15:34:43.242525 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-nbnnn"
Jan 28 15:34:43 crc kubenswrapper[4893]: I0128 15:34:43.364123 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b554e78-6b57-406d-8a05-0e2931db92b7-config-data\") pod \"2b554e78-6b57-406d-8a05-0e2931db92b7\" (UID: \"2b554e78-6b57-406d-8a05-0e2931db92b7\") "
Jan 28 15:34:43 crc kubenswrapper[4893]: I0128 15:34:43.364470 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f42pf\" (UniqueName: \"kubernetes.io/projected/2b554e78-6b57-406d-8a05-0e2931db92b7-kube-api-access-f42pf\") pod \"2b554e78-6b57-406d-8a05-0e2931db92b7\" (UID: \"2b554e78-6b57-406d-8a05-0e2931db92b7\") "
Jan 28 15:34:43 crc kubenswrapper[4893]: I0128 15:34:43.364631 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b554e78-6b57-406d-8a05-0e2931db92b7-scripts\") pod \"2b554e78-6b57-406d-8a05-0e2931db92b7\" (UID: \"2b554e78-6b57-406d-8a05-0e2931db92b7\") "
Jan 28 15:34:43 crc kubenswrapper[4893]: I0128 15:34:43.369495 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b554e78-6b57-406d-8a05-0e2931db92b7-scripts" (OuterVolumeSpecName: "scripts") pod "2b554e78-6b57-406d-8a05-0e2931db92b7" (UID: "2b554e78-6b57-406d-8a05-0e2931db92b7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:34:43 crc kubenswrapper[4893]: I0128 15:34:43.369605 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b554e78-6b57-406d-8a05-0e2931db92b7-kube-api-access-f42pf" (OuterVolumeSpecName: "kube-api-access-f42pf") pod "2b554e78-6b57-406d-8a05-0e2931db92b7" (UID: "2b554e78-6b57-406d-8a05-0e2931db92b7"). InnerVolumeSpecName "kube-api-access-f42pf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:34:43 crc kubenswrapper[4893]: I0128 15:34:43.389979 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b554e78-6b57-406d-8a05-0e2931db92b7-config-data" (OuterVolumeSpecName: "config-data") pod "2b554e78-6b57-406d-8a05-0e2931db92b7" (UID: "2b554e78-6b57-406d-8a05-0e2931db92b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:34:43 crc kubenswrapper[4893]: I0128 15:34:43.465905 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b554e78-6b57-406d-8a05-0e2931db92b7-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 15:34:43 crc kubenswrapper[4893]: I0128 15:34:43.465961 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f42pf\" (UniqueName: \"kubernetes.io/projected/2b554e78-6b57-406d-8a05-0e2931db92b7-kube-api-access-f42pf\") on node \"crc\" DevicePath \"\""
Jan 28 15:34:43 crc kubenswrapper[4893]: I0128 15:34:43.465978 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b554e78-6b57-406d-8a05-0e2931db92b7-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 15:34:43 crc kubenswrapper[4893]: I0128 15:34:43.861530 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-nbnnn" event={"ID":"2b554e78-6b57-406d-8a05-0e2931db92b7","Type":"ContainerDied","Data":"f959b02114b32d7e1886adc7b6f1ecb05729aaf13439170a5acda469cdf1edf4"}
Jan 28 15:34:43 crc kubenswrapper[4893]: I0128 15:34:43.861567 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f959b02114b32d7e1886adc7b6f1ecb05729aaf13439170a5acda469cdf1edf4"
Jan 28 15:34:43 crc kubenswrapper[4893]: I0128 15:34:43.861603 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-nbnnn"
Jan 28 15:34:44 crc kubenswrapper[4893]: I0128 15:34:44.091791 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 28 15:34:44 crc kubenswrapper[4893]: I0128 15:34:44.092349 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="eb4d340d-b990-48c6-a31b-ab334c760096" containerName="nova-kuttl-api-log" containerID="cri-o://5f178a4da6157c6faa2ddbf8686b74431a4c49fb98e56e8b070bb9f5384b0838" gracePeriod=30
Jan 28 15:34:44 crc kubenswrapper[4893]: I0128 15:34:44.092525 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="eb4d340d-b990-48c6-a31b-ab334c760096" containerName="nova-kuttl-api-api" containerID="cri-o://9d71a878bf75bbda6490f13dcbd8c9a03f78afb33f487f0ba70abe43612ce7bf" gracePeriod=30
Jan 28 15:34:44 crc kubenswrapper[4893]: I0128 15:34:44.115318 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 28 15:34:44 crc kubenswrapper[4893]: I0128 15:34:44.115607 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="ac4cba31-7ca0-4de5-9eef-f1ea959c1823" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://16ffb4ea84423727ea795c398a5f24d6346472e66ab82a4999838bbc1cdf6b08" gracePeriod=30
Jan 28 15:34:44 crc kubenswrapper[4893]: I0128 15:34:44.145397 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 28 15:34:44 crc kubenswrapper[4893]: I0128 15:34:44.145633 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="97f23fe0-8b86-4d77-b796-5016503b4ffa" containerName="nova-kuttl-metadata-log" containerID="cri-o://27545413a8c6b79cabbb2e91c07917d6e9f4da198a70f68721ab43b93f0faabf" gracePeriod=30
Jan 28 15:34:44 crc kubenswrapper[4893]: I0128 15:34:44.145769 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="97f23fe0-8b86-4d77-b796-5016503b4ffa" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://4bada11841fc4e03476ff622b988adbc99dda9709edc84756e0d0e4378d32c94" gracePeriod=30
Jan 28 15:34:44 crc kubenswrapper[4893]: I0128 15:34:44.882391 4893 generic.go:334] "Generic (PLEG): container finished" podID="eb4d340d-b990-48c6-a31b-ab334c760096" containerID="5f178a4da6157c6faa2ddbf8686b74431a4c49fb98e56e8b070bb9f5384b0838" exitCode=143
Jan 28 15:34:44 crc kubenswrapper[4893]: I0128 15:34:44.882496 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"eb4d340d-b990-48c6-a31b-ab334c760096","Type":"ContainerDied","Data":"5f178a4da6157c6faa2ddbf8686b74431a4c49fb98e56e8b070bb9f5384b0838"}
Jan 28 15:34:44 crc kubenswrapper[4893]: I0128 15:34:44.884381 4893 generic.go:334] "Generic (PLEG): container finished" podID="97f23fe0-8b86-4d77-b796-5016503b4ffa" containerID="27545413a8c6b79cabbb2e91c07917d6e9f4da198a70f68721ab43b93f0faabf" exitCode=143
Jan 28 15:34:44 crc kubenswrapper[4893]: I0128 15:34:44.884422 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"97f23fe0-8b86-4d77-b796-5016503b4ffa","Type":"ContainerDied","Data":"27545413a8c6b79cabbb2e91c07917d6e9f4da198a70f68721ab43b93f0faabf"}
Jan 28 15:34:46 crc kubenswrapper[4893]: E0128 15:34:46.098177 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="16ffb4ea84423727ea795c398a5f24d6346472e66ab82a4999838bbc1cdf6b08" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 28 15:34:46 crc kubenswrapper[4893]: E0128 15:34:46.099764 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="16ffb4ea84423727ea795c398a5f24d6346472e66ab82a4999838bbc1cdf6b08" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 28 15:34:46 crc kubenswrapper[4893]: E0128 15:34:46.100975 4893 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="16ffb4ea84423727ea795c398a5f24d6346472e66ab82a4999838bbc1cdf6b08" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 28 15:34:46 crc kubenswrapper[4893]: E0128 15:34:46.101024 4893 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="ac4cba31-7ca0-4de5-9eef-f1ea959c1823" containerName="nova-kuttl-scheduler-scheduler"
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.780273 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.787021 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.840700 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8lhh\" (UniqueName: \"kubernetes.io/projected/97f23fe0-8b86-4d77-b796-5016503b4ffa-kube-api-access-n8lhh\") pod \"97f23fe0-8b86-4d77-b796-5016503b4ffa\" (UID: \"97f23fe0-8b86-4d77-b796-5016503b4ffa\") "
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.840881 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97f23fe0-8b86-4d77-b796-5016503b4ffa-logs\") pod \"97f23fe0-8b86-4d77-b796-5016503b4ffa\" (UID: \"97f23fe0-8b86-4d77-b796-5016503b4ffa\") "
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.840913 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb4d340d-b990-48c6-a31b-ab334c760096-config-data\") pod \"eb4d340d-b990-48c6-a31b-ab334c760096\" (UID: \"eb4d340d-b990-48c6-a31b-ab334c760096\") "
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.840942 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97f23fe0-8b86-4d77-b796-5016503b4ffa-config-data\") pod \"97f23fe0-8b86-4d77-b796-5016503b4ffa\" (UID: \"97f23fe0-8b86-4d77-b796-5016503b4ffa\") "
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.840967 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm8mx\" (UniqueName: \"kubernetes.io/projected/eb4d340d-b990-48c6-a31b-ab334c760096-kube-api-access-cm8mx\") pod \"eb4d340d-b990-48c6-a31b-ab334c760096\" (UID: \"eb4d340d-b990-48c6-a31b-ab334c760096\") "
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.840994 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb4d340d-b990-48c6-a31b-ab334c760096-logs\") pod \"eb4d340d-b990-48c6-a31b-ab334c760096\" (UID: \"eb4d340d-b990-48c6-a31b-ab334c760096\") "
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.842163 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb4d340d-b990-48c6-a31b-ab334c760096-logs" (OuterVolumeSpecName: "logs") pod "eb4d340d-b990-48c6-a31b-ab334c760096" (UID: "eb4d340d-b990-48c6-a31b-ab334c760096"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.842182 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97f23fe0-8b86-4d77-b796-5016503b4ffa-logs" (OuterVolumeSpecName: "logs") pod "97f23fe0-8b86-4d77-b796-5016503b4ffa" (UID: "97f23fe0-8b86-4d77-b796-5016503b4ffa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.848130 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97f23fe0-8b86-4d77-b796-5016503b4ffa-kube-api-access-n8lhh" (OuterVolumeSpecName: "kube-api-access-n8lhh") pod "97f23fe0-8b86-4d77-b796-5016503b4ffa" (UID: "97f23fe0-8b86-4d77-b796-5016503b4ffa"). InnerVolumeSpecName "kube-api-access-n8lhh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.849139 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb4d340d-b990-48c6-a31b-ab334c760096-kube-api-access-cm8mx" (OuterVolumeSpecName: "kube-api-access-cm8mx") pod "eb4d340d-b990-48c6-a31b-ab334c760096" (UID: "eb4d340d-b990-48c6-a31b-ab334c760096"). InnerVolumeSpecName "kube-api-access-cm8mx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.870784 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb4d340d-b990-48c6-a31b-ab334c760096-config-data" (OuterVolumeSpecName: "config-data") pod "eb4d340d-b990-48c6-a31b-ab334c760096" (UID: "eb4d340d-b990-48c6-a31b-ab334c760096"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.873976 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97f23fe0-8b86-4d77-b796-5016503b4ffa-config-data" (OuterVolumeSpecName: "config-data") pod "97f23fe0-8b86-4d77-b796-5016503b4ffa" (UID: "97f23fe0-8b86-4d77-b796-5016503b4ffa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.909175 4893 generic.go:334] "Generic (PLEG): container finished" podID="97f23fe0-8b86-4d77-b796-5016503b4ffa" containerID="4bada11841fc4e03476ff622b988adbc99dda9709edc84756e0d0e4378d32c94" exitCode=0
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.909255 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"97f23fe0-8b86-4d77-b796-5016503b4ffa","Type":"ContainerDied","Data":"4bada11841fc4e03476ff622b988adbc99dda9709edc84756e0d0e4378d32c94"}
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.909308 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"97f23fe0-8b86-4d77-b796-5016503b4ffa","Type":"ContainerDied","Data":"409c83c63c0986a3c6d1a667bd243833332eb1408c8782b09032abc10a408991"}
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.909308 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.909361 4893 scope.go:117] "RemoveContainer" containerID="4bada11841fc4e03476ff622b988adbc99dda9709edc84756e0d0e4378d32c94"
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.911090 4893 generic.go:334] "Generic (PLEG): container finished" podID="eb4d340d-b990-48c6-a31b-ab334c760096" containerID="9d71a878bf75bbda6490f13dcbd8c9a03f78afb33f487f0ba70abe43612ce7bf" exitCode=0
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.911122 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"eb4d340d-b990-48c6-a31b-ab334c760096","Type":"ContainerDied","Data":"9d71a878bf75bbda6490f13dcbd8c9a03f78afb33f487f0ba70abe43612ce7bf"}
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.911139 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"eb4d340d-b990-48c6-a31b-ab334c760096","Type":"ContainerDied","Data":"b391240ff36360b99fbcc11333071f73be1a875ee699df365698a0d674cce167"}
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.911177 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.932677 4893 scope.go:117] "RemoveContainer" containerID="27545413a8c6b79cabbb2e91c07917d6e9f4da198a70f68721ab43b93f0faabf"
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.944862 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97f23fe0-8b86-4d77-b796-5016503b4ffa-logs\") on node \"crc\" DevicePath \"\""
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.944910 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb4d340d-b990-48c6-a31b-ab334c760096-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.944948 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97f23fe0-8b86-4d77-b796-5016503b4ffa-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.944964 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm8mx\" (UniqueName: \"kubernetes.io/projected/eb4d340d-b990-48c6-a31b-ab334c760096-kube-api-access-cm8mx\") on node \"crc\" DevicePath \"\""
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.944978 4893 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb4d340d-b990-48c6-a31b-ab334c760096-logs\") on node \"crc\" DevicePath \"\""
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.944992 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8lhh\" (UniqueName: \"kubernetes.io/projected/97f23fe0-8b86-4d77-b796-5016503b4ffa-kube-api-access-n8lhh\") on node \"crc\" DevicePath \"\""
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.954045 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.969940 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.972676 4893 scope.go:117] "RemoveContainer" containerID="4bada11841fc4e03476ff622b988adbc99dda9709edc84756e0d0e4378d32c94"
Jan 28 15:34:47 crc kubenswrapper[4893]: E0128 15:34:47.973969 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bada11841fc4e03476ff622b988adbc99dda9709edc84756e0d0e4378d32c94\": container with ID starting with 4bada11841fc4e03476ff622b988adbc99dda9709edc84756e0d0e4378d32c94 not found: ID does not exist" containerID="4bada11841fc4e03476ff622b988adbc99dda9709edc84756e0d0e4378d32c94"
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.974002 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bada11841fc4e03476ff622b988adbc99dda9709edc84756e0d0e4378d32c94"} err="failed to get container status \"4bada11841fc4e03476ff622b988adbc99dda9709edc84756e0d0e4378d32c94\": rpc error: code = NotFound desc = could not find container \"4bada11841fc4e03476ff622b988adbc99dda9709edc84756e0d0e4378d32c94\": container with ID starting with 4bada11841fc4e03476ff622b988adbc99dda9709edc84756e0d0e4378d32c94 not found: ID does not exist"
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.974024 4893 scope.go:117] "RemoveContainer" containerID="27545413a8c6b79cabbb2e91c07917d6e9f4da198a70f68721ab43b93f0faabf"
Jan 28 15:34:47 crc kubenswrapper[4893]: E0128 15:34:47.976018 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27545413a8c6b79cabbb2e91c07917d6e9f4da198a70f68721ab43b93f0faabf\": container with ID starting with 27545413a8c6b79cabbb2e91c07917d6e9f4da198a70f68721ab43b93f0faabf not found: ID does not exist" containerID="27545413a8c6b79cabbb2e91c07917d6e9f4da198a70f68721ab43b93f0faabf"
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.976054 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27545413a8c6b79cabbb2e91c07917d6e9f4da198a70f68721ab43b93f0faabf"} err="failed to get container status \"27545413a8c6b79cabbb2e91c07917d6e9f4da198a70f68721ab43b93f0faabf\": rpc error: code = NotFound desc = could not find container \"27545413a8c6b79cabbb2e91c07917d6e9f4da198a70f68721ab43b93f0faabf\": container with ID starting with 27545413a8c6b79cabbb2e91c07917d6e9f4da198a70f68721ab43b93f0faabf not found: ID does not exist"
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.976073 4893 scope.go:117] "RemoveContainer" containerID="9d71a878bf75bbda6490f13dcbd8c9a03f78afb33f487f0ba70abe43612ce7bf"
Jan 28 15:34:47 crc kubenswrapper[4893]: I0128 15:34:47.983132 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.008950 4893 scope.go:117] "RemoveContainer" containerID="5f178a4da6157c6faa2ddbf8686b74431a4c49fb98e56e8b070bb9f5384b0838"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.017012 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.025171 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 28 15:34:48 crc kubenswrapper[4893]: E0128 15:34:48.025883 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b554e78-6b57-406d-8a05-0e2931db92b7" containerName="nova-manage"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.025975 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b554e78-6b57-406d-8a05-0e2931db92b7" containerName="nova-manage"
Jan 28 15:34:48 crc kubenswrapper[4893]: E0128 15:34:48.026044 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb4d340d-b990-48c6-a31b-ab334c760096" containerName="nova-kuttl-api-api"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.026096 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb4d340d-b990-48c6-a31b-ab334c760096" containerName="nova-kuttl-api-api"
Jan 28 15:34:48 crc kubenswrapper[4893]: E0128 15:34:48.026151 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97f23fe0-8b86-4d77-b796-5016503b4ffa" containerName="nova-kuttl-metadata-metadata"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.026227 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="97f23fe0-8b86-4d77-b796-5016503b4ffa" containerName="nova-kuttl-metadata-metadata"
Jan 28 15:34:48 crc kubenswrapper[4893]: E0128 15:34:48.026559 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97f23fe0-8b86-4d77-b796-5016503b4ffa" containerName="nova-kuttl-metadata-log"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.026698 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="97f23fe0-8b86-4d77-b796-5016503b4ffa" containerName="nova-kuttl-metadata-log"
Jan 28 15:34:48 crc kubenswrapper[4893]: E0128 15:34:48.026810 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb4d340d-b990-48c6-a31b-ab334c760096" containerName="nova-kuttl-api-log"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.026919 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb4d340d-b990-48c6-a31b-ab334c760096" containerName="nova-kuttl-api-log"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.027245 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb4d340d-b990-48c6-a31b-ab334c760096" containerName="nova-kuttl-api-log"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.027522 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="97f23fe0-8b86-4d77-b796-5016503b4ffa" containerName="nova-kuttl-metadata-metadata"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.027594 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="97f23fe0-8b86-4d77-b796-5016503b4ffa" containerName="nova-kuttl-metadata-log"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.027865 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb4d340d-b990-48c6-a31b-ab334c760096" containerName="nova-kuttl-api-api"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.027936 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b554e78-6b57-406d-8a05-0e2931db92b7" containerName="nova-manage"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.028852 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.030852 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.048314 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.050632 4893 scope.go:117] "RemoveContainer" containerID="9d71a878bf75bbda6490f13dcbd8c9a03f78afb33f487f0ba70abe43612ce7bf"
Jan 28 15:34:48 crc kubenswrapper[4893]: E0128 15:34:48.051106 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d71a878bf75bbda6490f13dcbd8c9a03f78afb33f487f0ba70abe43612ce7bf\": container with ID starting with 9d71a878bf75bbda6490f13dcbd8c9a03f78afb33f487f0ba70abe43612ce7bf not found: ID does not exist" containerID="9d71a878bf75bbda6490f13dcbd8c9a03f78afb33f487f0ba70abe43612ce7bf"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.051168 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d71a878bf75bbda6490f13dcbd8c9a03f78afb33f487f0ba70abe43612ce7bf"} err="failed to get container status \"9d71a878bf75bbda6490f13dcbd8c9a03f78afb33f487f0ba70abe43612ce7bf\": rpc error: code = NotFound desc = could not find container \"9d71a878bf75bbda6490f13dcbd8c9a03f78afb33f487f0ba70abe43612ce7bf\": container with ID starting with 9d71a878bf75bbda6490f13dcbd8c9a03f78afb33f487f0ba70abe43612ce7bf not found: ID does not exist"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.051195 4893 scope.go:117] "RemoveContainer" containerID="5f178a4da6157c6faa2ddbf8686b74431a4c49fb98e56e8b070bb9f5384b0838"
Jan 28 15:34:48 crc kubenswrapper[4893]: E0128 15:34:48.051531 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f178a4da6157c6faa2ddbf8686b74431a4c49fb98e56e8b070bb9f5384b0838\": container with ID starting with 5f178a4da6157c6faa2ddbf8686b74431a4c49fb98e56e8b070bb9f5384b0838 not found: ID does not exist" containerID="5f178a4da6157c6faa2ddbf8686b74431a4c49fb98e56e8b070bb9f5384b0838"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.051575 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f178a4da6157c6faa2ddbf8686b74431a4c49fb98e56e8b070bb9f5384b0838"} err="failed to get container status \"5f178a4da6157c6faa2ddbf8686b74431a4c49fb98e56e8b070bb9f5384b0838\": rpc error: code = NotFound desc = could not find container \"5f178a4da6157c6faa2ddbf8686b74431a4c49fb98e56e8b070bb9f5384b0838\": container with ID starting with 5f178a4da6157c6faa2ddbf8686b74431a4c49fb98e56e8b070bb9f5384b0838 not found: ID does not exist"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.060252 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.061896 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.064341 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.078271 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.147658 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ce2e6b-b04e-4d88-a01b-101d056e8137-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"28ce2e6b-b04e-4d88-a01b-101d056e8137\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.147728 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28ce2e6b-b04e-4d88-a01b-101d056e8137-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"28ce2e6b-b04e-4d88-a01b-101d056e8137\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.147777 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqmmr\" (UniqueName: \"kubernetes.io/projected/28ce2e6b-b04e-4d88-a01b-101d056e8137-kube-api-access-jqmmr\") pod \"nova-kuttl-metadata-0\" (UID: \"28ce2e6b-b04e-4d88-a01b-101d056e8137\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.249657 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/378ab36a-3a2c-4a6d-836f-92eba12307fe-logs\") pod \"nova-kuttl-api-0\" (UID: \"378ab36a-3a2c-4a6d-836f-92eba12307fe\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.249732 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/378ab36a-3a2c-4a6d-836f-92eba12307fe-config-data\") pod \"nova-kuttl-api-0\" (UID: \"378ab36a-3a2c-4a6d-836f-92eba12307fe\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.249797 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ce2e6b-b04e-4d88-a01b-101d056e8137-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"28ce2e6b-b04e-4d88-a01b-101d056e8137\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.249847 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28ce2e6b-b04e-4d88-a01b-101d056e8137-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"28ce2e6b-b04e-4d88-a01b-101d056e8137\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.249874 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdsp9\" (UniqueName: \"kubernetes.io/projected/378ab36a-3a2c-4a6d-836f-92eba12307fe-kube-api-access-sdsp9\") pod \"nova-kuttl-api-0\" (UID: \"378ab36a-3a2c-4a6d-836f-92eba12307fe\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.249918 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqmmr\" (UniqueName: \"kubernetes.io/projected/28ce2e6b-b04e-4d88-a01b-101d056e8137-kube-api-access-jqmmr\") pod \"nova-kuttl-metadata-0\" (UID: \"28ce2e6b-b04e-4d88-a01b-101d056e8137\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.250381 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28ce2e6b-b04e-4d88-a01b-101d056e8137-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"28ce2e6b-b04e-4d88-a01b-101d056e8137\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.255687 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ce2e6b-b04e-4d88-a01b-101d056e8137-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"28ce2e6b-b04e-4d88-a01b-101d056e8137\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.269939 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqmmr\" (UniqueName: \"kubernetes.io/projected/28ce2e6b-b04e-4d88-a01b-101d056e8137-kube-api-access-jqmmr\") pod \"nova-kuttl-metadata-0\" (UID: \"28ce2e6b-b04e-4d88-a01b-101d056e8137\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.351850 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/378ab36a-3a2c-4a6d-836f-92eba12307fe-logs\") pod \"nova-kuttl-api-0\" (UID: \"378ab36a-3a2c-4a6d-836f-92eba12307fe\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.351909 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/378ab36a-3a2c-4a6d-836f-92eba12307fe-config-data\") pod \"nova-kuttl-api-0\" (UID: \"378ab36a-3a2c-4a6d-836f-92eba12307fe\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.351970 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdsp9\" (UniqueName: \"kubernetes.io/projected/378ab36a-3a2c-4a6d-836f-92eba12307fe-kube-api-access-sdsp9\") pod \"nova-kuttl-api-0\" (UID: \"378ab36a-3a2c-4a6d-836f-92eba12307fe\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.352525 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/378ab36a-3a2c-4a6d-836f-92eba12307fe-logs\") pod \"nova-kuttl-api-0\" (UID: \"378ab36a-3a2c-4a6d-836f-92eba12307fe\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.353501 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.355762 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/378ab36a-3a2c-4a6d-836f-92eba12307fe-config-data\") pod \"nova-kuttl-api-0\" (UID: \"378ab36a-3a2c-4a6d-836f-92eba12307fe\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.369451 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdsp9\" (UniqueName: \"kubernetes.io/projected/378ab36a-3a2c-4a6d-836f-92eba12307fe-kube-api-access-sdsp9\") pod \"nova-kuttl-api-0\" (UID: \"378ab36a-3a2c-4a6d-836f-92eba12307fe\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.379599 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.575456 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.757740 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac4cba31-7ca0-4de5-9eef-f1ea959c1823-config-data\") pod \"ac4cba31-7ca0-4de5-9eef-f1ea959c1823\" (UID: \"ac4cba31-7ca0-4de5-9eef-f1ea959c1823\") "
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.757907 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6tph\" (UniqueName: \"kubernetes.io/projected/ac4cba31-7ca0-4de5-9eef-f1ea959c1823-kube-api-access-g6tph\") pod \"ac4cba31-7ca0-4de5-9eef-f1ea959c1823\" (UID: \"ac4cba31-7ca0-4de5-9eef-f1ea959c1823\") "
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.762345 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac4cba31-7ca0-4de5-9eef-f1ea959c1823-kube-api-access-g6tph" (OuterVolumeSpecName: "kube-api-access-g6tph") pod "ac4cba31-7ca0-4de5-9eef-f1ea959c1823" (UID: "ac4cba31-7ca0-4de5-9eef-f1ea959c1823"). InnerVolumeSpecName "kube-api-access-g6tph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.780186 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac4cba31-7ca0-4de5-9eef-f1ea959c1823-config-data" (OuterVolumeSpecName: "config-data") pod "ac4cba31-7ca0-4de5-9eef-f1ea959c1823" (UID: "ac4cba31-7ca0-4de5-9eef-f1ea959c1823"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.849997 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.861150 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac4cba31-7ca0-4de5-9eef-f1ea959c1823-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.861192 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6tph\" (UniqueName: \"kubernetes.io/projected/ac4cba31-7ca0-4de5-9eef-f1ea959c1823-kube-api-access-g6tph\") on node \"crc\" DevicePath \"\""
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.901800 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97f23fe0-8b86-4d77-b796-5016503b4ffa" path="/var/lib/kubelet/pods/97f23fe0-8b86-4d77-b796-5016503b4ffa/volumes"
Jan 28 15:34:48 crc kubenswrapper[4893]: W0128 15:34:48.902443 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28ce2e6b_b04e_4d88_a01b_101d056e8137.slice/crio-11e183412ce28d5677532a3a598f027922c2160f0f94d2ff8ba86135ec912898 WatchSource:0}: Error finding container 11e183412ce28d5677532a3a598f027922c2160f0f94d2ff8ba86135ec912898: Status 404 returned error can't find the container with id 11e183412ce28d5677532a3a598f027922c2160f0f94d2ff8ba86135ec912898
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.902599 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb4d340d-b990-48c6-a31b-ab334c760096" path="/var/lib/kubelet/pods/eb4d340d-b990-48c6-a31b-ab334c760096/volumes"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.903350 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.926297 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"28ce2e6b-b04e-4d88-a01b-101d056e8137","Type":"ContainerStarted","Data":"11e183412ce28d5677532a3a598f027922c2160f0f94d2ff8ba86135ec912898"}
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.929312 4893 generic.go:334] "Generic (PLEG): container finished" podID="ac4cba31-7ca0-4de5-9eef-f1ea959c1823" containerID="16ffb4ea84423727ea795c398a5f24d6346472e66ab82a4999838bbc1cdf6b08" exitCode=0
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.929379 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"ac4cba31-7ca0-4de5-9eef-f1ea959c1823","Type":"ContainerDied","Data":"16ffb4ea84423727ea795c398a5f24d6346472e66ab82a4999838bbc1cdf6b08"}
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.929403 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"ac4cba31-7ca0-4de5-9eef-f1ea959c1823","Type":"ContainerDied","Data":"fdc5cd2f9e6a497e9a57da61076632656ee1cd6cf2eb99ef70861b313d898f30"}
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.929420 4893 scope.go:117] "RemoveContainer" containerID="16ffb4ea84423727ea795c398a5f24d6346472e66ab82a4999838bbc1cdf6b08"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.929449 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.933083 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"378ab36a-3a2c-4a6d-836f-92eba12307fe","Type":"ContainerStarted","Data":"8c322629cc8dc9db02ceb474090341f2e69cd99f75e1745d3b374d258080adef"}
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.966823 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.967512 4893 scope.go:117] "RemoveContainer" containerID="16ffb4ea84423727ea795c398a5f24d6346472e66ab82a4999838bbc1cdf6b08"
Jan 28 15:34:48 crc kubenswrapper[4893]: E0128 15:34:48.971801 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16ffb4ea84423727ea795c398a5f24d6346472e66ab82a4999838bbc1cdf6b08\": container with ID starting with 16ffb4ea84423727ea795c398a5f24d6346472e66ab82a4999838bbc1cdf6b08 not found: ID does not exist" containerID="16ffb4ea84423727ea795c398a5f24d6346472e66ab82a4999838bbc1cdf6b08"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.971855 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16ffb4ea84423727ea795c398a5f24d6346472e66ab82a4999838bbc1cdf6b08"} err="failed to get container status \"16ffb4ea84423727ea795c398a5f24d6346472e66ab82a4999838bbc1cdf6b08\": rpc error: code = NotFound desc = could not find container \"16ffb4ea84423727ea795c398a5f24d6346472e66ab82a4999838bbc1cdf6b08\": container with ID starting with 16ffb4ea84423727ea795c398a5f24d6346472e66ab82a4999838bbc1cdf6b08 not found: ID does not exist"
Jan 28 15:34:48 crc kubenswrapper[4893]: I0128 15:34:48.987057 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 28 15:34:49 crc kubenswrapper[4893]: I0128 15:34:49.014898 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 28 15:34:49 crc kubenswrapper[4893]: E0128 15:34:49.015412 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac4cba31-7ca0-4de5-9eef-f1ea959c1823" containerName="nova-kuttl-scheduler-scheduler"
Jan 28 15:34:49 crc kubenswrapper[4893]: I0128 15:34:49.015436 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac4cba31-7ca0-4de5-9eef-f1ea959c1823" containerName="nova-kuttl-scheduler-scheduler"
Jan 28 15:34:49 crc kubenswrapper[4893]: I0128 15:34:49.015680 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac4cba31-7ca0-4de5-9eef-f1ea959c1823" containerName="nova-kuttl-scheduler-scheduler"
Jan 28 15:34:49 crc kubenswrapper[4893]: I0128 15:34:49.016321 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 28 15:34:49 crc kubenswrapper[4893]: I0128 15:34:49.018647 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data"
Jan 28 15:34:49 crc kubenswrapper[4893]: I0128 15:34:49.022744 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 28 15:34:49 crc kubenswrapper[4893]: I0128 15:34:49.165844 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdrm7\" (UniqueName: \"kubernetes.io/projected/00c7d078-56fd-4f9a-a20a-5dc498625eb1-kube-api-access-pdrm7\") pod \"nova-kuttl-scheduler-0\" (UID: \"00c7d078-56fd-4f9a-a20a-5dc498625eb1\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 28 15:34:49 crc kubenswrapper[4893]: I0128 15:34:49.165890 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c7d078-56fd-4f9a-a20a-5dc498625eb1-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"00c7d078-56fd-4f9a-a20a-5dc498625eb1\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 28 15:34:49 crc kubenswrapper[4893]: I0128 15:34:49.267441 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdrm7\" (UniqueName: \"kubernetes.io/projected/00c7d078-56fd-4f9a-a20a-5dc498625eb1-kube-api-access-pdrm7\") pod \"nova-kuttl-scheduler-0\" (UID: \"00c7d078-56fd-4f9a-a20a-5dc498625eb1\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 28 15:34:49 crc kubenswrapper[4893]: I0128 15:34:49.267506 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c7d078-56fd-4f9a-a20a-5dc498625eb1-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"00c7d078-56fd-4f9a-a20a-5dc498625eb1\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 28 15:34:49 crc kubenswrapper[4893]: I0128 15:34:49.270917 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00c7d078-56fd-4f9a-a20a-5dc498625eb1-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"00c7d078-56fd-4f9a-a20a-5dc498625eb1\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 28 15:34:49 crc kubenswrapper[4893]: I0128 15:34:49.283981 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdrm7\" (UniqueName: \"kubernetes.io/projected/00c7d078-56fd-4f9a-a20a-5dc498625eb1-kube-api-access-pdrm7\") pod \"nova-kuttl-scheduler-0\" (UID: \"00c7d078-56fd-4f9a-a20a-5dc498625eb1\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 28 15:34:49 crc kubenswrapper[4893]: I0128 15:34:49.344199 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 28 15:34:49 crc kubenswrapper[4893]: I0128 15:34:49.776224 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 28 15:34:49 crc kubenswrapper[4893]: I0128 15:34:49.944212 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"28ce2e6b-b04e-4d88-a01b-101d056e8137","Type":"ContainerStarted","Data":"403ba49985dc43176729ea824b7ae99eb6425193da93d47d2dfbd86a3379c4fd"}
Jan 28 15:34:49 crc kubenswrapper[4893]: I0128 15:34:49.944583 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"28ce2e6b-b04e-4d88-a01b-101d056e8137","Type":"ContainerStarted","Data":"83185262b47b2fd3825272bcc08ecdab3cf5a5340a603a3f5bab2bebcfead479"}
Jan 28 15:34:49 crc kubenswrapper[4893]: I0128 15:34:49.945439 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"00c7d078-56fd-4f9a-a20a-5dc498625eb1","Type":"ContainerStarted","Data":"b7f7cff8967d48449f63216f75dfba1e844cf12b3171ae5dac06e956908d046d"}
Jan 28 15:34:49 crc kubenswrapper[4893]: I0128 15:34:49.948265 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"378ab36a-3a2c-4a6d-836f-92eba12307fe","Type":"ContainerStarted","Data":"57e2309daabf970f7ba1d7467a5a968342aaa51e7498516b8438113af56861b6"}
Jan 28 15:34:49 crc kubenswrapper[4893]: I0128 15:34:49.948284 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"378ab36a-3a2c-4a6d-836f-92eba12307fe","Type":"ContainerStarted","Data":"d349b86934d315678eca96ab0b0eee91bd2dfa4cb91da0a96161caa8f26b4826"}
Jan 28 15:34:49 crc kubenswrapper[4893]: I0128 15:34:49.973017 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.972994818 podStartE2EDuration="2.972994818s" podCreationTimestamp="2026-01-28 15:34:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:34:49.963243933 +0000 UTC m=+2007.736858961" watchObservedRunningTime="2026-01-28 15:34:49.972994818 +0000 UTC m=+2007.746609846"
Jan 28 15:34:49 crc kubenswrapper[4893]: I0128 15:34:49.984442 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.984425468 podStartE2EDuration="2.984425468s" podCreationTimestamp="2026-01-28 15:34:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:34:49.983459422 +0000 UTC m=+2007.757074440" watchObservedRunningTime="2026-01-28 15:34:49.984425468 +0000 UTC m=+2007.758040496"
Jan 28 15:34:50 crc kubenswrapper[4893]: I0128 15:34:50.901803 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac4cba31-7ca0-4de5-9eef-f1ea959c1823" path="/var/lib/kubelet/pods/ac4cba31-7ca0-4de5-9eef-f1ea959c1823/volumes"
Jan 28 15:34:50 crc kubenswrapper[4893]: I0128 15:34:50.958897 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"00c7d078-56fd-4f9a-a20a-5dc498625eb1","Type":"ContainerStarted","Data":"19543593b76d82a135cf7dc52bfacaa9e49b76068cfd7861fbf96f4d91f99337"}
Jan 28 15:34:50 crc kubenswrapper[4893]: I0128 15:34:50.977977 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.977963291 podStartE2EDuration="2.977963291s" podCreationTimestamp="2026-01-28 15:34:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:34:50.977161249 +0000 UTC m=+2008.750776287" watchObservedRunningTime="2026-01-28 15:34:50.977963291 +0000 UTC m=+2008.751578319"
Jan 28 15:34:53 crc kubenswrapper[4893]: I0128 15:34:53.353919 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:34:53 crc kubenswrapper[4893]: I0128 15:34:53.354248 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:34:54 crc kubenswrapper[4893]: I0128 15:34:54.345318 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 28 15:34:58 crc kubenswrapper[4893]: I0128 15:34:58.354699 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:34:58 crc kubenswrapper[4893]: I0128 15:34:58.354971 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:34:58 crc kubenswrapper[4893]: I0128 15:34:58.380551 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:34:58 crc kubenswrapper[4893]: I0128 15:34:58.380604 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:34:59 crc kubenswrapper[4893]: I0128 15:34:59.346050 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 28 15:34:59 crc kubenswrapper[4893]: I0128 15:34:59.371263 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 28 15:34:59 crc kubenswrapper[4893]: I0128 15:34:59.520974 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="378ab36a-3a2c-4a6d-836f-92eba12307fe" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.231:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 15:34:59 crc kubenswrapper[4893]: I0128 15:34:59.521041 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="28ce2e6b-b04e-4d88-a01b-101d056e8137" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.230:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 15:34:59 crc kubenswrapper[4893]: I0128 15:34:59.521078 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="28ce2e6b-b04e-4d88-a01b-101d056e8137" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.230:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 15:34:59 crc kubenswrapper[4893]: I0128 15:34:59.521163 4893 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="378ab36a-3a2c-4a6d-836f-92eba12307fe" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.231:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 15:35:00 crc kubenswrapper[4893]: I0128 15:35:00.071303 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 28 15:35:08 crc kubenswrapper[4893]: I0128 15:35:08.357231 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:35:08 crc kubenswrapper[4893]: I0128 15:35:08.358438 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:35:08 crc kubenswrapper[4893]: I0128 15:35:08.360232 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:35:08 crc kubenswrapper[4893]: I0128 15:35:08.361094 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 28 15:35:08 crc kubenswrapper[4893]: I0128 15:35:08.385981 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:35:08 crc kubenswrapper[4893]: I0128 15:35:08.386428 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:35:08 crc kubenswrapper[4893]: I0128 15:35:08.386504 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:35:08 crc kubenswrapper[4893]: I0128 15:35:08.389776 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:35:09 crc kubenswrapper[4893]: I0128 15:35:09.148292 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:35:09 crc kubenswrapper[4893]: I0128 15:35:09.152974 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 28 15:35:11 crc kubenswrapper[4893]: I0128 15:35:11.474896 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp"]
Jan 28 15:35:11 crc kubenswrapper[4893]: I0128 15:35:11.477587 4893 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" Jan 28 15:35:11 crc kubenswrapper[4893]: I0128 15:35:11.480785 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data" Jan 28 15:35:11 crc kubenswrapper[4893]: I0128 15:35:11.480960 4893 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts" Jan 28 15:35:11 crc kubenswrapper[4893]: I0128 15:35:11.483889 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp"] Jan 28 15:35:11 crc kubenswrapper[4893]: I0128 15:35:11.495726 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1771f4b-e3fc-4a93-8a60-c9c53f248e02-config-data\") pod \"nova-kuttl-cell1-cell-delete-vz6mp\" (UID: \"c1771f4b-e3fc-4a93-8a60-c9c53f248e02\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" Jan 28 15:35:11 crc kubenswrapper[4893]: I0128 15:35:11.495802 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m85vc\" (UniqueName: \"kubernetes.io/projected/c1771f4b-e3fc-4a93-8a60-c9c53f248e02-kube-api-access-m85vc\") pod \"nova-kuttl-cell1-cell-delete-vz6mp\" (UID: \"c1771f4b-e3fc-4a93-8a60-c9c53f248e02\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" Jan 28 15:35:11 crc kubenswrapper[4893]: I0128 15:35:11.496216 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1771f4b-e3fc-4a93-8a60-c9c53f248e02-scripts\") pod \"nova-kuttl-cell1-cell-delete-vz6mp\" (UID: \"c1771f4b-e3fc-4a93-8a60-c9c53f248e02\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" Jan 28 15:35:11 crc kubenswrapper[4893]: I0128 15:35:11.597776 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1771f4b-e3fc-4a93-8a60-c9c53f248e02-config-data\") pod \"nova-kuttl-cell1-cell-delete-vz6mp\" (UID: \"c1771f4b-e3fc-4a93-8a60-c9c53f248e02\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" Jan 28 15:35:11 crc kubenswrapper[4893]: I0128 15:35:11.597858 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m85vc\" (UniqueName: \"kubernetes.io/projected/c1771f4b-e3fc-4a93-8a60-c9c53f248e02-kube-api-access-m85vc\") pod \"nova-kuttl-cell1-cell-delete-vz6mp\" (UID: \"c1771f4b-e3fc-4a93-8a60-c9c53f248e02\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" Jan 28 15:35:11 crc kubenswrapper[4893]: I0128 15:35:11.597926 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1771f4b-e3fc-4a93-8a60-c9c53f248e02-scripts\") pod \"nova-kuttl-cell1-cell-delete-vz6mp\" (UID: \"c1771f4b-e3fc-4a93-8a60-c9c53f248e02\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" Jan 28 15:35:11 crc kubenswrapper[4893]: I0128 15:35:11.603635 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1771f4b-e3fc-4a93-8a60-c9c53f248e02-scripts\") pod \"nova-kuttl-cell1-cell-delete-vz6mp\" (UID: \"c1771f4b-e3fc-4a93-8a60-c9c53f248e02\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" Jan 28 15:35:11 crc 
kubenswrapper[4893]: I0128 15:35:11.611410 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1771f4b-e3fc-4a93-8a60-c9c53f248e02-config-data\") pod \"nova-kuttl-cell1-cell-delete-vz6mp\" (UID: \"c1771f4b-e3fc-4a93-8a60-c9c53f248e02\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" Jan 28 15:35:11 crc kubenswrapper[4893]: I0128 15:35:11.622894 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m85vc\" (UniqueName: \"kubernetes.io/projected/c1771f4b-e3fc-4a93-8a60-c9c53f248e02-kube-api-access-m85vc\") pod \"nova-kuttl-cell1-cell-delete-vz6mp\" (UID: \"c1771f4b-e3fc-4a93-8a60-c9c53f248e02\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" Jan 28 15:35:11 crc kubenswrapper[4893]: I0128 15:35:11.808006 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" Jan 28 15:35:12 crc kubenswrapper[4893]: I0128 15:35:12.270714 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp"] Jan 28 15:35:12 crc kubenswrapper[4893]: W0128 15:35:12.275118 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1771f4b_e3fc_4a93_8a60_c9c53f248e02.slice/crio-9b7200dd24bdbad8fc773016cd805e1c0940f154008522d1a6fc6f90a3758134 WatchSource:0}: Error finding container 9b7200dd24bdbad8fc773016cd805e1c0940f154008522d1a6fc6f90a3758134: Status 404 returned error can't find the container with id 9b7200dd24bdbad8fc773016cd805e1c0940f154008522d1a6fc6f90a3758134 Jan 28 15:35:13 crc kubenswrapper[4893]: I0128 15:35:13.186437 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" event={"ID":"c1771f4b-e3fc-4a93-8a60-c9c53f248e02","Type":"ContainerStarted","Data":"441e861acebca074c73f77d0cc25f9faea0cd611b616409e04e38dda9889a237"} Jan 28 15:35:13 crc kubenswrapper[4893]: I0128 15:35:13.186575 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" event={"ID":"c1771f4b-e3fc-4a93-8a60-c9c53f248e02","Type":"ContainerStarted","Data":"9b7200dd24bdbad8fc773016cd805e1c0940f154008522d1a6fc6f90a3758134"} Jan 28 15:35:13 crc kubenswrapper[4893]: I0128 15:35:13.211881 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" podStartSLOduration=2.211860602 podStartE2EDuration="2.211860602s" podCreationTimestamp="2026-01-28 15:35:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:35:13.203381022 +0000 UTC m=+2030.976996050" watchObservedRunningTime="2026-01-28 15:35:13.211860602 +0000 UTC m=+2030.985475630" Jan 28 15:35:18 crc kubenswrapper[4893]: I0128 15:35:18.249048 4893 generic.go:334] "Generic (PLEG): container finished" podID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" containerID="441e861acebca074c73f77d0cc25f9faea0cd611b616409e04e38dda9889a237" exitCode=2 Jan 28 15:35:18 crc kubenswrapper[4893]: I0128 15:35:18.249892 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" event={"ID":"c1771f4b-e3fc-4a93-8a60-c9c53f248e02","Type":"ContainerDied","Data":"441e861acebca074c73f77d0cc25f9faea0cd611b616409e04e38dda9889a237"} Jan 28 15:35:18 
crc kubenswrapper[4893]: I0128 15:35:18.250700 4893 scope.go:117] "RemoveContainer" containerID="441e861acebca074c73f77d0cc25f9faea0cd611b616409e04e38dda9889a237" Jan 28 15:35:19 crc kubenswrapper[4893]: I0128 15:35:19.263349 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" event={"ID":"c1771f4b-e3fc-4a93-8a60-c9c53f248e02","Type":"ContainerStarted","Data":"a060f0972693e278ae73fd71f62d566575ebb53c6b10915211160331cdad8767"} Jan 28 15:35:24 crc kubenswrapper[4893]: I0128 15:35:24.359790 4893 generic.go:334] "Generic (PLEG): container finished" podID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" containerID="a060f0972693e278ae73fd71f62d566575ebb53c6b10915211160331cdad8767" exitCode=2 Jan 28 15:35:24 crc kubenswrapper[4893]: I0128 15:35:24.359934 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" event={"ID":"c1771f4b-e3fc-4a93-8a60-c9c53f248e02","Type":"ContainerDied","Data":"a060f0972693e278ae73fd71f62d566575ebb53c6b10915211160331cdad8767"} Jan 28 15:35:24 crc kubenswrapper[4893]: I0128 15:35:24.360349 4893 scope.go:117] "RemoveContainer" containerID="441e861acebca074c73f77d0cc25f9faea0cd611b616409e04e38dda9889a237" Jan 28 15:35:24 crc kubenswrapper[4893]: I0128 15:35:24.360972 4893 scope.go:117] "RemoveContainer" containerID="a060f0972693e278ae73fd71f62d566575ebb53c6b10915211160331cdad8767" Jan 28 15:35:24 crc kubenswrapper[4893]: E0128 15:35:24.361341 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 10s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-vz6mp_nova-kuttl-default(c1771f4b-e3fc-4a93-8a60-c9c53f248e02)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" Jan 28 15:35:31 crc kubenswrapper[4893]: I0128 15:35:31.792324 4893 scope.go:117] "RemoveContainer" containerID="f6a191782dd3bee45b7a085f71c8cf9c4812c16b6828156f71574dff32d9af0b" Jan 28 15:35:31 crc kubenswrapper[4893]: I0128 15:35:31.838877 4893 scope.go:117] "RemoveContainer" containerID="4ffcd14bf457b6e72cb35a5f1d2cced5ecae9770d32e0c76591905938d62c424" Jan 28 15:35:31 crc kubenswrapper[4893]: I0128 15:35:31.867505 4893 scope.go:117] "RemoveContainer" containerID="cfdfcc8472faa19a7fca4ea06a713dcd29df99efb3791cd8c029b237805ba99b" Jan 28 15:35:31 crc kubenswrapper[4893]: I0128 15:35:31.914114 4893 scope.go:117] "RemoveContainer" containerID="5eaff64b1e230f888c692be62b1a691a45657242938fc5ef0184ce86ce4d73fa" Jan 28 15:35:31 crc kubenswrapper[4893]: I0128 15:35:31.950262 4893 scope.go:117] "RemoveContainer" containerID="33224f119e8b5920f7b73ccd3e2c4b87b2d1767328e2569e3763864dcc54f584" Jan 28 15:35:31 crc kubenswrapper[4893]: I0128 15:35:31.993600 4893 scope.go:117] "RemoveContainer" containerID="d9c418afbeb3b342d8024d1e60d149ef61e9073d0920bf25b1e00f8f7a86528b" Jan 28 15:35:32 crc kubenswrapper[4893]: I0128 15:35:32.059560 4893 scope.go:117] "RemoveContainer" containerID="d29d8d17833e2e74cdb55e289921d2faff22569fd1ea0bd607c1425baae46f20" Jan 28 15:35:32 crc kubenswrapper[4893]: I0128 15:35:32.079381 4893 scope.go:117] "RemoveContainer" containerID="3d1b403d5632b8cf08f0c888989edd025237e7157b6141a656eaf8fd87353ba5" Jan 28 15:35:32 crc kubenswrapper[4893]: I0128 15:35:32.101872 4893 scope.go:117] "RemoveContainer" containerID="a7ec881599c5217c35142168671f1668a634a165131961ae038c87ce092a710a" Jan 28 15:35:32 crc kubenswrapper[4893]: I0128 
15:35:32.123066 4893 scope.go:117] "RemoveContainer" containerID="128e4dfa038ed31cae55723b617c9fa7f98d74c221322e41d75c629953c58fbf" Jan 28 15:35:39 crc kubenswrapper[4893]: I0128 15:35:39.892981 4893 scope.go:117] "RemoveContainer" containerID="a060f0972693e278ae73fd71f62d566575ebb53c6b10915211160331cdad8767" Jan 28 15:35:40 crc kubenswrapper[4893]: I0128 15:35:40.500743 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" event={"ID":"c1771f4b-e3fc-4a93-8a60-c9c53f248e02","Type":"ContainerStarted","Data":"500eb56b6f5abfd5ae2f6ff1912b674c11e12e9a1de1830c4d3e7b5a71796803"} Jan 28 15:35:45 crc kubenswrapper[4893]: I0128 15:35:45.544058 4893 generic.go:334] "Generic (PLEG): container finished" podID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" containerID="500eb56b6f5abfd5ae2f6ff1912b674c11e12e9a1de1830c4d3e7b5a71796803" exitCode=2 Jan 28 15:35:45 crc kubenswrapper[4893]: I0128 15:35:45.544151 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" event={"ID":"c1771f4b-e3fc-4a93-8a60-c9c53f248e02","Type":"ContainerDied","Data":"500eb56b6f5abfd5ae2f6ff1912b674c11e12e9a1de1830c4d3e7b5a71796803"} Jan 28 15:35:45 crc kubenswrapper[4893]: I0128 15:35:45.544684 4893 scope.go:117] "RemoveContainer" containerID="a060f0972693e278ae73fd71f62d566575ebb53c6b10915211160331cdad8767" Jan 28 15:35:45 crc kubenswrapper[4893]: I0128 15:35:45.545316 4893 scope.go:117] "RemoveContainer" containerID="500eb56b6f5abfd5ae2f6ff1912b674c11e12e9a1de1830c4d3e7b5a71796803" Jan 28 15:35:45 crc kubenswrapper[4893]: E0128 15:35:45.545682 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-vz6mp_nova-kuttl-default(c1771f4b-e3fc-4a93-8a60-c9c53f248e02)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" Jan 28 15:35:59 crc kubenswrapper[4893]: I0128 15:35:59.893159 4893 scope.go:117] "RemoveContainer" containerID="500eb56b6f5abfd5ae2f6ff1912b674c11e12e9a1de1830c4d3e7b5a71796803" Jan 28 15:35:59 crc kubenswrapper[4893]: E0128 15:35:59.893833 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-vz6mp_nova-kuttl-default(c1771f4b-e3fc-4a93-8a60-c9c53f248e02)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" Jan 28 15:36:13 crc kubenswrapper[4893]: I0128 15:36:13.892209 4893 scope.go:117] "RemoveContainer" containerID="500eb56b6f5abfd5ae2f6ff1912b674c11e12e9a1de1830c4d3e7b5a71796803" Jan 28 15:36:14 crc kubenswrapper[4893]: I0128 15:36:14.799440 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" event={"ID":"c1771f4b-e3fc-4a93-8a60-c9c53f248e02","Type":"ContainerStarted","Data":"4e0be62e3aff7e2a6365d9d85d5e84eb2886f6e3f7bfcc38130097e8dd03bbb8"} Jan 28 15:36:18 crc kubenswrapper[4893]: I0128 15:36:18.095084 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hjpzk"] Jan 28 15:36:18 crc kubenswrapper[4893]: I0128 15:36:18.097128 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hjpzk" Jan 28 15:36:18 crc kubenswrapper[4893]: I0128 15:36:18.110758 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hjpzk"] Jan 28 15:36:18 crc kubenswrapper[4893]: I0128 15:36:18.187090 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8c96798-1557-43ab-ae9a-1a589119aefc-catalog-content\") pod \"redhat-operators-hjpzk\" (UID: \"f8c96798-1557-43ab-ae9a-1a589119aefc\") " pod="openshift-marketplace/redhat-operators-hjpzk" Jan 28 15:36:18 crc kubenswrapper[4893]: I0128 15:36:18.187184 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq8pk\" (UniqueName: \"kubernetes.io/projected/f8c96798-1557-43ab-ae9a-1a589119aefc-kube-api-access-xq8pk\") pod \"redhat-operators-hjpzk\" (UID: \"f8c96798-1557-43ab-ae9a-1a589119aefc\") " pod="openshift-marketplace/redhat-operators-hjpzk" Jan 28 15:36:18 crc kubenswrapper[4893]: I0128 15:36:18.187247 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8c96798-1557-43ab-ae9a-1a589119aefc-utilities\") pod \"redhat-operators-hjpzk\" (UID: \"f8c96798-1557-43ab-ae9a-1a589119aefc\") " pod="openshift-marketplace/redhat-operators-hjpzk" Jan 28 15:36:18 crc kubenswrapper[4893]: I0128 15:36:18.289200 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8c96798-1557-43ab-ae9a-1a589119aefc-catalog-content\") pod \"redhat-operators-hjpzk\" (UID: \"f8c96798-1557-43ab-ae9a-1a589119aefc\") " pod="openshift-marketplace/redhat-operators-hjpzk" Jan 28 15:36:18 crc kubenswrapper[4893]: I0128 15:36:18.289682 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq8pk\" (UniqueName: \"kubernetes.io/projected/f8c96798-1557-43ab-ae9a-1a589119aefc-kube-api-access-xq8pk\") pod \"redhat-operators-hjpzk\" (UID: \"f8c96798-1557-43ab-ae9a-1a589119aefc\") " pod="openshift-marketplace/redhat-operators-hjpzk" Jan 28 15:36:18 crc kubenswrapper[4893]: I0128 15:36:18.289794 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8c96798-1557-43ab-ae9a-1a589119aefc-catalog-content\") pod \"redhat-operators-hjpzk\" (UID: \"f8c96798-1557-43ab-ae9a-1a589119aefc\") " pod="openshift-marketplace/redhat-operators-hjpzk" Jan 28 15:36:18 crc kubenswrapper[4893]: I0128 15:36:18.290951 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8c96798-1557-43ab-ae9a-1a589119aefc-utilities\") pod \"redhat-operators-hjpzk\" (UID: \"f8c96798-1557-43ab-ae9a-1a589119aefc\") " pod="openshift-marketplace/redhat-operators-hjpzk" Jan 28 15:36:18 crc kubenswrapper[4893]: I0128 15:36:18.291290 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8c96798-1557-43ab-ae9a-1a589119aefc-utilities\") pod \"redhat-operators-hjpzk\" (UID: \"f8c96798-1557-43ab-ae9a-1a589119aefc\") " pod="openshift-marketplace/redhat-operators-hjpzk" Jan 28 15:36:18 crc kubenswrapper[4893]: I0128 15:36:18.311503 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xq8pk\" (UniqueName: \"kubernetes.io/projected/f8c96798-1557-43ab-ae9a-1a589119aefc-kube-api-access-xq8pk\") pod \"redhat-operators-hjpzk\" (UID: \"f8c96798-1557-43ab-ae9a-1a589119aefc\") " pod="openshift-marketplace/redhat-operators-hjpzk" Jan 28 15:36:18 crc kubenswrapper[4893]: I0128 15:36:18.418755 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hjpzk" Jan 28 15:36:19 crc kubenswrapper[4893]: I0128 15:36:19.072025 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hjpzk"] Jan 28 15:36:19 crc kubenswrapper[4893]: I0128 15:36:19.839867 4893 generic.go:334] "Generic (PLEG): container finished" podID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" containerID="4e0be62e3aff7e2a6365d9d85d5e84eb2886f6e3f7bfcc38130097e8dd03bbb8" exitCode=2 Jan 28 15:36:19 crc kubenswrapper[4893]: I0128 15:36:19.840063 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" event={"ID":"c1771f4b-e3fc-4a93-8a60-c9c53f248e02","Type":"ContainerDied","Data":"4e0be62e3aff7e2a6365d9d85d5e84eb2886f6e3f7bfcc38130097e8dd03bbb8"} Jan 28 15:36:19 crc kubenswrapper[4893]: I0128 15:36:19.840175 4893 scope.go:117] "RemoveContainer" containerID="500eb56b6f5abfd5ae2f6ff1912b674c11e12e9a1de1830c4d3e7b5a71796803" Jan 28 15:36:19 crc kubenswrapper[4893]: I0128 15:36:19.840727 4893 scope.go:117] "RemoveContainer" containerID="4e0be62e3aff7e2a6365d9d85d5e84eb2886f6e3f7bfcc38130097e8dd03bbb8" Jan 28 15:36:19 crc kubenswrapper[4893]: E0128 15:36:19.840989 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-vz6mp_nova-kuttl-default(c1771f4b-e3fc-4a93-8a60-c9c53f248e02)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" Jan 28 15:36:19 crc kubenswrapper[4893]: I0128 15:36:19.842584 4893 generic.go:334] "Generic (PLEG): container finished" podID="f8c96798-1557-43ab-ae9a-1a589119aefc" containerID="46cf52c14140bce83d0852829b55dbba49a8e8852399c7bb174493e6f7377f34" exitCode=0 Jan 28 15:36:19 crc kubenswrapper[4893]: I0128 15:36:19.842608 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hjpzk" event={"ID":"f8c96798-1557-43ab-ae9a-1a589119aefc","Type":"ContainerDied","Data":"46cf52c14140bce83d0852829b55dbba49a8e8852399c7bb174493e6f7377f34"} Jan 28 15:36:19 crc kubenswrapper[4893]: I0128 15:36:19.842624 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hjpzk" event={"ID":"f8c96798-1557-43ab-ae9a-1a589119aefc","Type":"ContainerStarted","Data":"414297071bc337651f8715add75c42b22e6d6de5c05df6442f24450e23566c97"} Jan 28 15:36:19 crc kubenswrapper[4893]: I0128 15:36:19.875194 4893 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:36:22 crc kubenswrapper[4893]: I0128 15:36:22.478538 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hjpzk" event={"ID":"f8c96798-1557-43ab-ae9a-1a589119aefc","Type":"ContainerStarted","Data":"97ed86099011e05ec9451fde41502b397dc296553be1c9b26de383ab22eb055b"} Jan 28 15:36:23 crc kubenswrapper[4893]: I0128 15:36:23.489733 4893 generic.go:334] "Generic (PLEG): container finished" 
podID="f8c96798-1557-43ab-ae9a-1a589119aefc" containerID="97ed86099011e05ec9451fde41502b397dc296553be1c9b26de383ab22eb055b" exitCode=0 Jan 28 15:36:23 crc kubenswrapper[4893]: I0128 15:36:23.489778 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hjpzk" event={"ID":"f8c96798-1557-43ab-ae9a-1a589119aefc","Type":"ContainerDied","Data":"97ed86099011e05ec9451fde41502b397dc296553be1c9b26de383ab22eb055b"} Jan 28 15:36:26 crc kubenswrapper[4893]: I0128 15:36:26.517305 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hjpzk" event={"ID":"f8c96798-1557-43ab-ae9a-1a589119aefc","Type":"ContainerStarted","Data":"31ae438548994ad39b50abc9fe4e0de391e3a008ab11f1c7a84cec6e6c5211eb"} Jan 28 15:36:26 crc kubenswrapper[4893]: I0128 15:36:26.536318 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hjpzk" podStartSLOduration=3.201312129 podStartE2EDuration="8.536299665s" podCreationTimestamp="2026-01-28 15:36:18 +0000 UTC" firstStartedPulling="2026-01-28 15:36:19.874940701 +0000 UTC m=+2097.648555729" lastFinishedPulling="2026-01-28 15:36:25.209928237 +0000 UTC m=+2102.983543265" observedRunningTime="2026-01-28 15:36:26.534204298 +0000 UTC m=+2104.307819326" watchObservedRunningTime="2026-01-28 15:36:26.536299665 +0000 UTC m=+2104.309914703" Jan 28 15:36:28 crc kubenswrapper[4893]: I0128 15:36:28.418883 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hjpzk" Jan 28 15:36:28 crc kubenswrapper[4893]: I0128 15:36:28.419199 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hjpzk" Jan 28 15:36:29 crc kubenswrapper[4893]: I0128 15:36:29.469671 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hjpzk" podUID="f8c96798-1557-43ab-ae9a-1a589119aefc" containerName="registry-server" probeResult="failure" output=< Jan 28 15:36:29 crc kubenswrapper[4893]: timeout: failed to connect service ":50051" within 1s Jan 28 15:36:29 crc kubenswrapper[4893]: > Jan 28 15:36:30 crc kubenswrapper[4893]: I0128 15:36:30.891653 4893 scope.go:117] "RemoveContainer" containerID="4e0be62e3aff7e2a6365d9d85d5e84eb2886f6e3f7bfcc38130097e8dd03bbb8" Jan 28 15:36:30 crc kubenswrapper[4893]: E0128 15:36:30.892148 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-vz6mp_nova-kuttl-default(c1771f4b-e3fc-4a93-8a60-c9c53f248e02)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" Jan 28 15:36:32 crc kubenswrapper[4893]: I0128 15:36:32.355439 4893 scope.go:117] "RemoveContainer" containerID="a8a6c463fe31f9d95e10ee96f624accda2ac3466897c2010c5ed48f0ea494aa4" Jan 28 15:36:32 crc kubenswrapper[4893]: I0128 15:36:32.396957 4893 scope.go:117] "RemoveContainer" containerID="7afe546e59039d9e982b04cd67336483a8a4ad4c1af2b8c09d04c94f208aa244" Jan 28 15:36:32 crc kubenswrapper[4893]: I0128 15:36:32.416743 4893 scope.go:117] "RemoveContainer" containerID="192ae4f21b4990f917167a00f8af309e685591fd51c8f296a00e2395efcab31b" Jan 28 15:36:32 crc kubenswrapper[4893]: I0128 15:36:32.467010 4893 scope.go:117] "RemoveContainer" containerID="17f1856a1cab1c8c7c0ea08d1e1d0f378fa24ea2f8ebfdd48be6ce9bc2771e3f" Jan 28 
15:36:32 crc kubenswrapper[4893]: I0128 15:36:32.489947 4893 scope.go:117] "RemoveContainer" containerID="76586a1d37703fd75294f9a31a07e8090d51f7065ae6a2446cd571869a855ada" Jan 28 15:36:35 crc kubenswrapper[4893]: I0128 15:36:35.722851 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:36:35 crc kubenswrapper[4893]: I0128 15:36:35.723110 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:36:38 crc kubenswrapper[4893]: I0128 15:36:38.471640 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hjpzk" Jan 28 15:36:38 crc kubenswrapper[4893]: I0128 15:36:38.522006 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hjpzk" Jan 28 15:36:38 crc kubenswrapper[4893]: I0128 15:36:38.721146 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hjpzk"] Jan 28 15:36:39 crc kubenswrapper[4893]: I0128 15:36:39.625509 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hjpzk" podUID="f8c96798-1557-43ab-ae9a-1a589119aefc" containerName="registry-server" containerID="cri-o://31ae438548994ad39b50abc9fe4e0de391e3a008ab11f1c7a84cec6e6c5211eb" gracePeriod=2 Jan 28 15:36:40 crc kubenswrapper[4893]: I0128 15:36:40.081883 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hjpzk" Jan 28 15:36:40 crc kubenswrapper[4893]: I0128 15:36:40.179799 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq8pk\" (UniqueName: \"kubernetes.io/projected/f8c96798-1557-43ab-ae9a-1a589119aefc-kube-api-access-xq8pk\") pod \"f8c96798-1557-43ab-ae9a-1a589119aefc\" (UID: \"f8c96798-1557-43ab-ae9a-1a589119aefc\") " Jan 28 15:36:40 crc kubenswrapper[4893]: I0128 15:36:40.180925 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8c96798-1557-43ab-ae9a-1a589119aefc-catalog-content\") pod \"f8c96798-1557-43ab-ae9a-1a589119aefc\" (UID: \"f8c96798-1557-43ab-ae9a-1a589119aefc\") " Jan 28 15:36:40 crc kubenswrapper[4893]: I0128 15:36:40.181183 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8c96798-1557-43ab-ae9a-1a589119aefc-utilities\") pod \"f8c96798-1557-43ab-ae9a-1a589119aefc\" (UID: \"f8c96798-1557-43ab-ae9a-1a589119aefc\") " Jan 28 15:36:40 crc kubenswrapper[4893]: I0128 15:36:40.182676 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8c96798-1557-43ab-ae9a-1a589119aefc-utilities" (OuterVolumeSpecName: "utilities") pod "f8c96798-1557-43ab-ae9a-1a589119aefc" (UID: "f8c96798-1557-43ab-ae9a-1a589119aefc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:36:40 crc kubenswrapper[4893]: I0128 15:36:40.185947 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8c96798-1557-43ab-ae9a-1a589119aefc-kube-api-access-xq8pk" (OuterVolumeSpecName: "kube-api-access-xq8pk") pod "f8c96798-1557-43ab-ae9a-1a589119aefc" (UID: "f8c96798-1557-43ab-ae9a-1a589119aefc"). InnerVolumeSpecName "kube-api-access-xq8pk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:36:40 crc kubenswrapper[4893]: I0128 15:36:40.283900 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8c96798-1557-43ab-ae9a-1a589119aefc-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:36:40 crc kubenswrapper[4893]: I0128 15:36:40.283948 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq8pk\" (UniqueName: \"kubernetes.io/projected/f8c96798-1557-43ab-ae9a-1a589119aefc-kube-api-access-xq8pk\") on node \"crc\" DevicePath \"\"" Jan 28 15:36:40 crc kubenswrapper[4893]: I0128 15:36:40.301892 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8c96798-1557-43ab-ae9a-1a589119aefc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8c96798-1557-43ab-ae9a-1a589119aefc" (UID: "f8c96798-1557-43ab-ae9a-1a589119aefc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:36:40 crc kubenswrapper[4893]: I0128 15:36:40.385792 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8c96798-1557-43ab-ae9a-1a589119aefc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:36:40 crc kubenswrapper[4893]: I0128 15:36:40.634345 4893 generic.go:334] "Generic (PLEG): container finished" podID="f8c96798-1557-43ab-ae9a-1a589119aefc" containerID="31ae438548994ad39b50abc9fe4e0de391e3a008ab11f1c7a84cec6e6c5211eb" exitCode=0 Jan 28 15:36:40 crc kubenswrapper[4893]: I0128 15:36:40.634393 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hjpzk" event={"ID":"f8c96798-1557-43ab-ae9a-1a589119aefc","Type":"ContainerDied","Data":"31ae438548994ad39b50abc9fe4e0de391e3a008ab11f1c7a84cec6e6c5211eb"} Jan 28 15:36:40 crc kubenswrapper[4893]: I0128 15:36:40.634405 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hjpzk" Jan 28 15:36:40 crc kubenswrapper[4893]: I0128 15:36:40.634435 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hjpzk" event={"ID":"f8c96798-1557-43ab-ae9a-1a589119aefc","Type":"ContainerDied","Data":"414297071bc337651f8715add75c42b22e6d6de5c05df6442f24450e23566c97"} Jan 28 15:36:40 crc kubenswrapper[4893]: I0128 15:36:40.634455 4893 scope.go:117] "RemoveContainer" containerID="31ae438548994ad39b50abc9fe4e0de391e3a008ab11f1c7a84cec6e6c5211eb" Jan 28 15:36:40 crc kubenswrapper[4893]: I0128 15:36:40.650565 4893 scope.go:117] "RemoveContainer" containerID="97ed86099011e05ec9451fde41502b397dc296553be1c9b26de383ab22eb055b" Jan 28 15:36:40 crc kubenswrapper[4893]: I0128 15:36:40.681253 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hjpzk"] Jan 28 15:36:40 crc kubenswrapper[4893]: I0128 15:36:40.687974 4893 scope.go:117] "RemoveContainer" containerID="46cf52c14140bce83d0852829b55dbba49a8e8852399c7bb174493e6f7377f34" Jan 28 15:36:40 crc kubenswrapper[4893]: I0128 15:36:40.693653 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hjpzk"] Jan 28 15:36:40 crc kubenswrapper[4893]: I0128 15:36:40.718678 4893 scope.go:117] "RemoveContainer" containerID="31ae438548994ad39b50abc9fe4e0de391e3a008ab11f1c7a84cec6e6c5211eb" Jan 28 15:36:40 crc kubenswrapper[4893]: E0128 15:36:40.719158 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31ae438548994ad39b50abc9fe4e0de391e3a008ab11f1c7a84cec6e6c5211eb\": container with ID starting with 31ae438548994ad39b50abc9fe4e0de391e3a008ab11f1c7a84cec6e6c5211eb not found: ID does not exist" containerID="31ae438548994ad39b50abc9fe4e0de391e3a008ab11f1c7a84cec6e6c5211eb" Jan 28 15:36:40 crc kubenswrapper[4893]: I0128 15:36:40.719189 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31ae438548994ad39b50abc9fe4e0de391e3a008ab11f1c7a84cec6e6c5211eb"} err="failed to get container status \"31ae438548994ad39b50abc9fe4e0de391e3a008ab11f1c7a84cec6e6c5211eb\": rpc error: code = NotFound desc = could not find container \"31ae438548994ad39b50abc9fe4e0de391e3a008ab11f1c7a84cec6e6c5211eb\": container with ID starting with 31ae438548994ad39b50abc9fe4e0de391e3a008ab11f1c7a84cec6e6c5211eb not found: ID does not exist" Jan 28 15:36:40 crc kubenswrapper[4893]: I0128 15:36:40.719212 4893 scope.go:117] "RemoveContainer" containerID="97ed86099011e05ec9451fde41502b397dc296553be1c9b26de383ab22eb055b" Jan 28 15:36:40 crc kubenswrapper[4893]: E0128 15:36:40.719614 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97ed86099011e05ec9451fde41502b397dc296553be1c9b26de383ab22eb055b\": container with ID starting with 97ed86099011e05ec9451fde41502b397dc296553be1c9b26de383ab22eb055b not found: ID does not exist" containerID="97ed86099011e05ec9451fde41502b397dc296553be1c9b26de383ab22eb055b" Jan 28 15:36:40 crc kubenswrapper[4893]: I0128 15:36:40.719635 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97ed86099011e05ec9451fde41502b397dc296553be1c9b26de383ab22eb055b"} err="failed to get container status \"97ed86099011e05ec9451fde41502b397dc296553be1c9b26de383ab22eb055b\": rpc error: code = NotFound desc = could not find container 
\"97ed86099011e05ec9451fde41502b397dc296553be1c9b26de383ab22eb055b\": container with ID starting with 97ed86099011e05ec9451fde41502b397dc296553be1c9b26de383ab22eb055b not found: ID does not exist" Jan 28 15:36:40 crc kubenswrapper[4893]: I0128 15:36:40.719654 4893 scope.go:117] "RemoveContainer" containerID="46cf52c14140bce83d0852829b55dbba49a8e8852399c7bb174493e6f7377f34" Jan 28 15:36:40 crc kubenswrapper[4893]: E0128 15:36:40.720110 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46cf52c14140bce83d0852829b55dbba49a8e8852399c7bb174493e6f7377f34\": container with ID starting with 46cf52c14140bce83d0852829b55dbba49a8e8852399c7bb174493e6f7377f34 not found: ID does not exist" containerID="46cf52c14140bce83d0852829b55dbba49a8e8852399c7bb174493e6f7377f34" Jan 28 15:36:40 crc kubenswrapper[4893]: I0128 15:36:40.720172 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46cf52c14140bce83d0852829b55dbba49a8e8852399c7bb174493e6f7377f34"} err="failed to get container status \"46cf52c14140bce83d0852829b55dbba49a8e8852399c7bb174493e6f7377f34\": rpc error: code = NotFound desc = could not find container \"46cf52c14140bce83d0852829b55dbba49a8e8852399c7bb174493e6f7377f34\": container with ID starting with 46cf52c14140bce83d0852829b55dbba49a8e8852399c7bb174493e6f7377f34 not found: ID does not exist" Jan 28 15:36:40 crc kubenswrapper[4893]: I0128 15:36:40.902930 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8c96798-1557-43ab-ae9a-1a589119aefc" path="/var/lib/kubelet/pods/f8c96798-1557-43ab-ae9a-1a589119aefc/volumes" Jan 28 15:36:42 crc kubenswrapper[4893]: I0128 15:36:42.896079 4893 scope.go:117] "RemoveContainer" containerID="4e0be62e3aff7e2a6365d9d85d5e84eb2886f6e3f7bfcc38130097e8dd03bbb8" Jan 28 15:36:42 crc kubenswrapper[4893]: E0128 15:36:42.896572 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-vz6mp_nova-kuttl-default(c1771f4b-e3fc-4a93-8a60-c9c53f248e02)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" Jan 28 15:36:56 crc kubenswrapper[4893]: I0128 15:36:56.892425 4893 scope.go:117] "RemoveContainer" containerID="4e0be62e3aff7e2a6365d9d85d5e84eb2886f6e3f7bfcc38130097e8dd03bbb8" Jan 28 15:36:56 crc kubenswrapper[4893]: E0128 15:36:56.893025 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-vz6mp_nova-kuttl-default(c1771f4b-e3fc-4a93-8a60-c9c53f248e02)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" Jan 28 15:37:05 crc kubenswrapper[4893]: I0128 15:37:05.722343 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:37:05 crc kubenswrapper[4893]: I0128 15:37:05.723811 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:37:10 crc kubenswrapper[4893]: I0128 15:37:10.891985 4893 scope.go:117] "RemoveContainer" containerID="4e0be62e3aff7e2a6365d9d85d5e84eb2886f6e3f7bfcc38130097e8dd03bbb8" Jan 28 15:37:11 crc kubenswrapper[4893]: I0128 15:37:11.877851 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" event={"ID":"c1771f4b-e3fc-4a93-8a60-c9c53f248e02","Type":"ContainerStarted","Data":"b5416cca10f707e0505d84954cee83d8dc7a7bdd7e075e0cd238abd4f2be1d54"} Jan 28 15:37:16 crc kubenswrapper[4893]: I0128 15:37:16.920200 4893 generic.go:334] "Generic (PLEG): container finished" podID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" containerID="b5416cca10f707e0505d84954cee83d8dc7a7bdd7e075e0cd238abd4f2be1d54" exitCode=2 Jan 28 15:37:16 crc kubenswrapper[4893]: I0128 15:37:16.920306 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" event={"ID":"c1771f4b-e3fc-4a93-8a60-c9c53f248e02","Type":"ContainerDied","Data":"b5416cca10f707e0505d84954cee83d8dc7a7bdd7e075e0cd238abd4f2be1d54"} Jan 28 15:37:16 crc kubenswrapper[4893]: I0128 15:37:16.920622 4893 scope.go:117] "RemoveContainer" containerID="4e0be62e3aff7e2a6365d9d85d5e84eb2886f6e3f7bfcc38130097e8dd03bbb8" Jan 28 15:37:16 crc kubenswrapper[4893]: I0128 15:37:16.921873 4893 scope.go:117] "RemoveContainer" containerID="b5416cca10f707e0505d84954cee83d8dc7a7bdd7e075e0cd238abd4f2be1d54" Jan 28 15:37:16 crc kubenswrapper[4893]: E0128 15:37:16.922508 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-vz6mp_nova-kuttl-default(c1771f4b-e3fc-4a93-8a60-c9c53f248e02)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" Jan 28 15:37:27 crc kubenswrapper[4893]: I0128 15:37:27.891995 4893 scope.go:117] "RemoveContainer" containerID="b5416cca10f707e0505d84954cee83d8dc7a7bdd7e075e0cd238abd4f2be1d54" Jan 28 15:37:27 crc kubenswrapper[4893]: E0128 15:37:27.892740 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-vz6mp_nova-kuttl-default(c1771f4b-e3fc-4a93-8a60-c9c53f248e02)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" Jan 28 15:37:32 crc kubenswrapper[4893]: I0128 15:37:32.600896 4893 scope.go:117] "RemoveContainer" containerID="9deb24d2a03765eb443d2d00b347f94bedb035551d2cd92297fd642e804f79a2" Jan 28 15:37:32 crc kubenswrapper[4893]: I0128 15:37:32.631976 4893 scope.go:117] "RemoveContainer" containerID="4e56d304cd1b510fe02ccca7c135cc7019afafa6cb02dd0f61b3c6406cf1320d" Jan 28 15:37:32 crc kubenswrapper[4893]: I0128 15:37:32.650408 4893 scope.go:117] "RemoveContainer" containerID="abae1f18a2baaf05ac990b2a23424bc8e1f5ffe2c192a429562cafb9405b5e86" Jan 28 15:37:32 crc kubenswrapper[4893]: I0128 15:37:32.670466 4893 scope.go:117] "RemoveContainer" containerID="a624bf19f62397cc4382015857d1007ed36b83c5e86bb8f1e88e57e8a4a7396b" Jan 28 15:37:35 crc kubenswrapper[4893]: I0128 15:37:35.722440 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:37:35 crc kubenswrapper[4893]: I0128 15:37:35.722857 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:37:35 crc kubenswrapper[4893]: I0128 15:37:35.722943 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" Jan 28 15:37:35 crc kubenswrapper[4893]: I0128 15:37:35.723677 4893 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d1e89d3b5214b1e2076651a4fdac0f9f4db53c16fe20d6f51f420c4a7e4e5bf5"} pod="openshift-machine-config-operator/machine-config-daemon-l2nht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:37:35 crc kubenswrapper[4893]: I0128 15:37:35.723737 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" containerID="cri-o://d1e89d3b5214b1e2076651a4fdac0f9f4db53c16fe20d6f51f420c4a7e4e5bf5" gracePeriod=600 Jan 28 15:37:36 crc kubenswrapper[4893]: I0128 15:37:36.089938 4893 generic.go:334] "Generic (PLEG): container finished" podID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerID="d1e89d3b5214b1e2076651a4fdac0f9f4db53c16fe20d6f51f420c4a7e4e5bf5" exitCode=0 Jan 28 15:37:36 crc kubenswrapper[4893]: I0128 15:37:36.089982 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" event={"ID":"b2ddd967-f9a8-464a-95de-512c9c5874fd","Type":"ContainerDied","Data":"d1e89d3b5214b1e2076651a4fdac0f9f4db53c16fe20d6f51f420c4a7e4e5bf5"} Jan 28 15:37:36 crc kubenswrapper[4893]: I0128 15:37:36.090007 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" event={"ID":"b2ddd967-f9a8-464a-95de-512c9c5874fd","Type":"ContainerStarted","Data":"cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01"} Jan 28 15:37:36 crc kubenswrapper[4893]: I0128 15:37:36.090023 4893 scope.go:117] "RemoveContainer" containerID="79c9bc3001d912e2badd455b42599656e0466fe2e0dd3c42a7967baabf46af51" Jan 28 15:37:39 crc kubenswrapper[4893]: I0128 15:37:39.892264 4893 scope.go:117] "RemoveContainer" containerID="b5416cca10f707e0505d84954cee83d8dc7a7bdd7e075e0cd238abd4f2be1d54" Jan 28 15:37:39 crc kubenswrapper[4893]: E0128 15:37:39.893115 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-vz6mp_nova-kuttl-default(c1771f4b-e3fc-4a93-8a60-c9c53f248e02)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" Jan 28 15:37:51 crc kubenswrapper[4893]: I0128 15:37:51.893058 4893 scope.go:117] "RemoveContainer" 
containerID="b5416cca10f707e0505d84954cee83d8dc7a7bdd7e075e0cd238abd4f2be1d54" Jan 28 15:37:51 crc kubenswrapper[4893]: E0128 15:37:51.893688 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-vz6mp_nova-kuttl-default(c1771f4b-e3fc-4a93-8a60-c9c53f248e02)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" Jan 28 15:38:02 crc kubenswrapper[4893]: I0128 15:38:02.898370 4893 scope.go:117] "RemoveContainer" containerID="b5416cca10f707e0505d84954cee83d8dc7a7bdd7e075e0cd238abd4f2be1d54" Jan 28 15:38:02 crc kubenswrapper[4893]: E0128 15:38:02.899271 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-vz6mp_nova-kuttl-default(c1771f4b-e3fc-4a93-8a60-c9c53f248e02)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" Jan 28 15:38:17 crc kubenswrapper[4893]: I0128 15:38:17.892382 4893 scope.go:117] "RemoveContainer" containerID="b5416cca10f707e0505d84954cee83d8dc7a7bdd7e075e0cd238abd4f2be1d54" Jan 28 15:38:17 crc kubenswrapper[4893]: E0128 15:38:17.893086 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-vz6mp_nova-kuttl-default(c1771f4b-e3fc-4a93-8a60-c9c53f248e02)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" Jan 28 15:38:28 crc kubenswrapper[4893]: I0128 15:38:28.892330 4893 scope.go:117] "RemoveContainer" containerID="b5416cca10f707e0505d84954cee83d8dc7a7bdd7e075e0cd238abd4f2be1d54" Jan 28 15:38:28 crc kubenswrapper[4893]: E0128 15:38:28.893098 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-vz6mp_nova-kuttl-default(c1771f4b-e3fc-4a93-8a60-c9c53f248e02)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" Jan 28 15:38:32 crc kubenswrapper[4893]: I0128 15:38:32.747746 4893 scope.go:117] "RemoveContainer" containerID="330b3897c89ddc89fd844eb0f1f66171322e6c54b44fd00cfe00e20c2f9a7987" Jan 28 15:38:32 crc kubenswrapper[4893]: I0128 15:38:32.774104 4893 scope.go:117] "RemoveContainer" containerID="c349ed8d33f428bfc9fe593c73b18a2fd3b6b0e70a38ea342dda8fd66a8f99c9" Jan 28 15:38:32 crc kubenswrapper[4893]: I0128 15:38:32.818802 4893 scope.go:117] "RemoveContainer" containerID="c0a69b997ccbe872776643df080ac65a53c48107a6e9f224e6c5c7c8a12875ac" Jan 28 15:38:32 crc kubenswrapper[4893]: I0128 15:38:32.853921 4893 scope.go:117] "RemoveContainer" containerID="0a1369b8f1048e1e0278a952fbefa77b088ce6ec42c8bb388dea79dbb15a2a0d" Jan 28 15:38:32 crc kubenswrapper[4893]: I0128 15:38:32.894999 4893 scope.go:117] "RemoveContainer" containerID="4c212a020f51331b43f6394d092d80a2c6ebc176b74b2247ec0f73b2031d7a82" Jan 28 15:38:32 crc kubenswrapper[4893]: I0128 15:38:32.941198 4893 scope.go:117] "RemoveContainer" containerID="e278042b5d30c73d8d779b41754f50c08b6f7213039453987843d28100a2907e" Jan 28 15:38:32 crc 
kubenswrapper[4893]: I0128 15:38:32.964644 4893 scope.go:117] "RemoveContainer" containerID="93504058c39d3c4aa71bddcce276392c7a9bfdbbda084d7f5fc64a4e16f0eb5a" Jan 28 15:38:32 crc kubenswrapper[4893]: I0128 15:38:32.983396 4893 scope.go:117] "RemoveContainer" containerID="7c1b2c4d8c0d7129c7b00b10d31b549824cea598aa25f027600826dc9d0bc3ed" Jan 28 15:38:33 crc kubenswrapper[4893]: I0128 15:38:33.002387 4893 scope.go:117] "RemoveContainer" containerID="176db77ef62236850a5a811427593898e8b70ac8aeafe7650d275e55cc72f6bf" Jan 28 15:38:33 crc kubenswrapper[4893]: I0128 15:38:33.040595 4893 scope.go:117] "RemoveContainer" containerID="9e309ea66e2f9e8d9137ef99df5f9ec42b132b30ecbd2f3d341d41b751ffffa4" Jan 28 15:38:33 crc kubenswrapper[4893]: I0128 15:38:33.062434 4893 scope.go:117] "RemoveContainer" containerID="4bbcac9de4ae0c2c1c1bc6069ca1d9a27ee98144bf47baea9bee7efa48f036e0" Jan 28 15:38:33 crc kubenswrapper[4893]: I0128 15:38:33.084174 4893 scope.go:117] "RemoveContainer" containerID="be5e36c8292480ef4d7e345dd4444f028fc0b36d91c9baeedd65a2cf266f70b7" Jan 28 15:38:33 crc kubenswrapper[4893]: I0128 15:38:33.107226 4893 scope.go:117] "RemoveContainer" containerID="ecd138730c777d575b3107add31046c2cf963d2f743399215d0c1bb44c20c7fd" Jan 28 15:38:33 crc kubenswrapper[4893]: I0128 15:38:33.136485 4893 scope.go:117] "RemoveContainer" containerID="39cce8be2d123446b1c3511dba6cb888613a2a0effdd5d86d24143e5bb07ae19" Jan 28 15:38:43 crc kubenswrapper[4893]: I0128 15:38:43.891617 4893 scope.go:117] "RemoveContainer" containerID="b5416cca10f707e0505d84954cee83d8dc7a7bdd7e075e0cd238abd4f2be1d54" Jan 28 15:38:44 crc kubenswrapper[4893]: I0128 15:38:44.685422 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" event={"ID":"c1771f4b-e3fc-4a93-8a60-c9c53f248e02","Type":"ContainerStarted","Data":"7d9081146070c30cd3beaf47d75cd91803a7a4d6cb88172e02e66af988619855"} Jan 28 15:38:49 crc kubenswrapper[4893]: I0128 15:38:49.723787 4893 generic.go:334] "Generic (PLEG): container finished" podID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" containerID="7d9081146070c30cd3beaf47d75cd91803a7a4d6cb88172e02e66af988619855" exitCode=2 Jan 28 15:38:49 crc kubenswrapper[4893]: I0128 15:38:49.723829 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" event={"ID":"c1771f4b-e3fc-4a93-8a60-c9c53f248e02","Type":"ContainerDied","Data":"7d9081146070c30cd3beaf47d75cd91803a7a4d6cb88172e02e66af988619855"} Jan 28 15:38:49 crc kubenswrapper[4893]: I0128 15:38:49.725447 4893 scope.go:117] "RemoveContainer" containerID="b5416cca10f707e0505d84954cee83d8dc7a7bdd7e075e0cd238abd4f2be1d54" Jan 28 15:38:49 crc kubenswrapper[4893]: I0128 15:38:49.726050 4893 scope.go:117] "RemoveContainer" containerID="7d9081146070c30cd3beaf47d75cd91803a7a4d6cb88172e02e66af988619855" Jan 28 15:38:49 crc kubenswrapper[4893]: E0128 15:38:49.726453 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-vz6mp_nova-kuttl-default(c1771f4b-e3fc-4a93-8a60-c9c53f248e02)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" Jan 28 15:39:00 crc kubenswrapper[4893]: I0128 15:39:00.892125 4893 scope.go:117] "RemoveContainer" containerID="7d9081146070c30cd3beaf47d75cd91803a7a4d6cb88172e02e66af988619855" Jan 28 15:39:00 crc kubenswrapper[4893]: E0128 
15:39:00.892791 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-vz6mp_nova-kuttl-default(c1771f4b-e3fc-4a93-8a60-c9c53f248e02)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" Jan 28 15:39:13 crc kubenswrapper[4893]: I0128 15:39:13.892022 4893 scope.go:117] "RemoveContainer" containerID="7d9081146070c30cd3beaf47d75cd91803a7a4d6cb88172e02e66af988619855" Jan 28 15:39:13 crc kubenswrapper[4893]: E0128 15:39:13.893038 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-vz6mp_nova-kuttl-default(c1771f4b-e3fc-4a93-8a60-c9c53f248e02)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" Jan 28 15:39:28 crc kubenswrapper[4893]: I0128 15:39:28.891764 4893 scope.go:117] "RemoveContainer" containerID="7d9081146070c30cd3beaf47d75cd91803a7a4d6cb88172e02e66af988619855" Jan 28 15:39:28 crc kubenswrapper[4893]: E0128 15:39:28.892462 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-vz6mp_nova-kuttl-default(c1771f4b-e3fc-4a93-8a60-c9c53f248e02)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" Jan 28 15:39:40 crc kubenswrapper[4893]: I0128 15:39:40.891554 4893 scope.go:117] "RemoveContainer" containerID="7d9081146070c30cd3beaf47d75cd91803a7a4d6cb88172e02e66af988619855" Jan 28 15:39:40 crc kubenswrapper[4893]: E0128 15:39:40.892362 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-vz6mp_nova-kuttl-default(c1771f4b-e3fc-4a93-8a60-c9c53f248e02)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" Jan 28 15:39:45 crc kubenswrapper[4893]: I0128 15:39:45.399803 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-87l6c"] Jan 28 15:39:45 crc kubenswrapper[4893]: E0128 15:39:45.401277 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8c96798-1557-43ab-ae9a-1a589119aefc" containerName="registry-server" Jan 28 15:39:45 crc kubenswrapper[4893]: I0128 15:39:45.401295 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8c96798-1557-43ab-ae9a-1a589119aefc" containerName="registry-server" Jan 28 15:39:45 crc kubenswrapper[4893]: E0128 15:39:45.401312 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8c96798-1557-43ab-ae9a-1a589119aefc" containerName="extract-utilities" Jan 28 15:39:45 crc kubenswrapper[4893]: I0128 15:39:45.401320 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8c96798-1557-43ab-ae9a-1a589119aefc" containerName="extract-utilities" Jan 28 15:39:45 crc kubenswrapper[4893]: E0128 15:39:45.401344 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8c96798-1557-43ab-ae9a-1a589119aefc" containerName="extract-content" Jan 28 15:39:45 crc kubenswrapper[4893]: I0128 
15:39:45.401352 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8c96798-1557-43ab-ae9a-1a589119aefc" containerName="extract-content" Jan 28 15:39:45 crc kubenswrapper[4893]: I0128 15:39:45.401562 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8c96798-1557-43ab-ae9a-1a589119aefc" containerName="registry-server" Jan 28 15:39:45 crc kubenswrapper[4893]: I0128 15:39:45.403053 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-87l6c" Jan 28 15:39:45 crc kubenswrapper[4893]: I0128 15:39:45.410216 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cf686c4-d25f-4ff7-893f-80f579240547-catalog-content\") pod \"certified-operators-87l6c\" (UID: \"6cf686c4-d25f-4ff7-893f-80f579240547\") " pod="openshift-marketplace/certified-operators-87l6c" Jan 28 15:39:45 crc kubenswrapper[4893]: I0128 15:39:45.410698 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cf686c4-d25f-4ff7-893f-80f579240547-utilities\") pod \"certified-operators-87l6c\" (UID: \"6cf686c4-d25f-4ff7-893f-80f579240547\") " pod="openshift-marketplace/certified-operators-87l6c" Jan 28 15:39:45 crc kubenswrapper[4893]: I0128 15:39:45.410862 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd9nf\" (UniqueName: \"kubernetes.io/projected/6cf686c4-d25f-4ff7-893f-80f579240547-kube-api-access-jd9nf\") pod \"certified-operators-87l6c\" (UID: \"6cf686c4-d25f-4ff7-893f-80f579240547\") " pod="openshift-marketplace/certified-operators-87l6c" Jan 28 15:39:45 crc kubenswrapper[4893]: I0128 15:39:45.423163 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-87l6c"] Jan 28 15:39:45 crc kubenswrapper[4893]: I0128 15:39:45.512303 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cf686c4-d25f-4ff7-893f-80f579240547-utilities\") pod \"certified-operators-87l6c\" (UID: \"6cf686c4-d25f-4ff7-893f-80f579240547\") " pod="openshift-marketplace/certified-operators-87l6c" Jan 28 15:39:45 crc kubenswrapper[4893]: I0128 15:39:45.512385 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd9nf\" (UniqueName: \"kubernetes.io/projected/6cf686c4-d25f-4ff7-893f-80f579240547-kube-api-access-jd9nf\") pod \"certified-operators-87l6c\" (UID: \"6cf686c4-d25f-4ff7-893f-80f579240547\") " pod="openshift-marketplace/certified-operators-87l6c" Jan 28 15:39:45 crc kubenswrapper[4893]: I0128 15:39:45.512836 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cf686c4-d25f-4ff7-893f-80f579240547-catalog-content\") pod \"certified-operators-87l6c\" (UID: \"6cf686c4-d25f-4ff7-893f-80f579240547\") " pod="openshift-marketplace/certified-operators-87l6c" Jan 28 15:39:45 crc kubenswrapper[4893]: I0128 15:39:45.513334 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cf686c4-d25f-4ff7-893f-80f579240547-utilities\") pod \"certified-operators-87l6c\" (UID: \"6cf686c4-d25f-4ff7-893f-80f579240547\") " pod="openshift-marketplace/certified-operators-87l6c" Jan 28 15:39:45 crc 
kubenswrapper[4893]: I0128 15:39:45.513514 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cf686c4-d25f-4ff7-893f-80f579240547-catalog-content\") pod \"certified-operators-87l6c\" (UID: \"6cf686c4-d25f-4ff7-893f-80f579240547\") " pod="openshift-marketplace/certified-operators-87l6c" Jan 28 15:39:45 crc kubenswrapper[4893]: I0128 15:39:45.544584 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd9nf\" (UniqueName: \"kubernetes.io/projected/6cf686c4-d25f-4ff7-893f-80f579240547-kube-api-access-jd9nf\") pod \"certified-operators-87l6c\" (UID: \"6cf686c4-d25f-4ff7-893f-80f579240547\") " pod="openshift-marketplace/certified-operators-87l6c" Jan 28 15:39:45 crc kubenswrapper[4893]: I0128 15:39:45.747157 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-87l6c" Jan 28 15:39:46 crc kubenswrapper[4893]: I0128 15:39:46.261739 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-87l6c"] Jan 28 15:39:47 crc kubenswrapper[4893]: I0128 15:39:47.236848 4893 generic.go:334] "Generic (PLEG): container finished" podID="6cf686c4-d25f-4ff7-893f-80f579240547" containerID="9fa3723c3e29b8d744feba9699ea3f8e03ed60d4b5db5a1e0bd0800ea997d341" exitCode=0 Jan 28 15:39:47 crc kubenswrapper[4893]: I0128 15:39:47.236947 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-87l6c" event={"ID":"6cf686c4-d25f-4ff7-893f-80f579240547","Type":"ContainerDied","Data":"9fa3723c3e29b8d744feba9699ea3f8e03ed60d4b5db5a1e0bd0800ea997d341"} Jan 28 15:39:47 crc kubenswrapper[4893]: I0128 15:39:47.237177 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-87l6c" event={"ID":"6cf686c4-d25f-4ff7-893f-80f579240547","Type":"ContainerStarted","Data":"b95716746fbc73b44c72c326cae8b26720ac561dc7782f7e332bf05b48936fb6"} Jan 28 15:39:49 crc kubenswrapper[4893]: I0128 15:39:49.255045 4893 generic.go:334] "Generic (PLEG): container finished" podID="6cf686c4-d25f-4ff7-893f-80f579240547" containerID="b423092b59241f4db85b20d4b0e70ce995a65de4dc0841e84d2fbd48c1fc343e" exitCode=0 Jan 28 15:39:49 crc kubenswrapper[4893]: I0128 15:39:49.255265 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-87l6c" event={"ID":"6cf686c4-d25f-4ff7-893f-80f579240547","Type":"ContainerDied","Data":"b423092b59241f4db85b20d4b0e70ce995a65de4dc0841e84d2fbd48c1fc343e"} Jan 28 15:39:50 crc kubenswrapper[4893]: I0128 15:39:50.276924 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-87l6c" event={"ID":"6cf686c4-d25f-4ff7-893f-80f579240547","Type":"ContainerStarted","Data":"2da9724d0331fd7f9904fc972dec62903a03fbe9f0d9dababb6a24e7053f75d7"} Jan 28 15:39:52 crc kubenswrapper[4893]: I0128 15:39:52.897177 4893 scope.go:117] "RemoveContainer" containerID="7d9081146070c30cd3beaf47d75cd91803a7a4d6cb88172e02e66af988619855" Jan 28 15:39:52 crc kubenswrapper[4893]: E0128 15:39:52.897549 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-vz6mp_nova-kuttl-default(c1771f4b-e3fc-4a93-8a60-c9c53f248e02)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" 
podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" Jan 28 15:39:55 crc kubenswrapper[4893]: I0128 15:39:55.747413 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-87l6c" Jan 28 15:39:55 crc kubenswrapper[4893]: I0128 15:39:55.748088 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-87l6c" Jan 28 15:39:55 crc kubenswrapper[4893]: I0128 15:39:55.788274 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-87l6c" Jan 28 15:39:55 crc kubenswrapper[4893]: I0128 15:39:55.816704 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-87l6c" podStartSLOduration=8.325963797 podStartE2EDuration="10.816682884s" podCreationTimestamp="2026-01-28 15:39:45 +0000 UTC" firstStartedPulling="2026-01-28 15:39:47.238790713 +0000 UTC m=+2305.012405741" lastFinishedPulling="2026-01-28 15:39:49.7295098 +0000 UTC m=+2307.503124828" observedRunningTime="2026-01-28 15:39:50.303233318 +0000 UTC m=+2308.076848356" watchObservedRunningTime="2026-01-28 15:39:55.816682884 +0000 UTC m=+2313.590297942" Jan 28 15:39:56 crc kubenswrapper[4893]: I0128 15:39:56.369007 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-87l6c" Jan 28 15:39:56 crc kubenswrapper[4893]: I0128 15:39:56.423555 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-87l6c"] Jan 28 15:39:58 crc kubenswrapper[4893]: I0128 15:39:58.352841 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-87l6c" podUID="6cf686c4-d25f-4ff7-893f-80f579240547" containerName="registry-server" containerID="cri-o://2da9724d0331fd7f9904fc972dec62903a03fbe9f0d9dababb6a24e7053f75d7" gracePeriod=2 Jan 28 15:39:58 crc kubenswrapper[4893]: I0128 15:39:58.775311 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-87l6c" Jan 28 15:39:58 crc kubenswrapper[4893]: I0128 15:39:58.846343 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cf686c4-d25f-4ff7-893f-80f579240547-utilities\") pod \"6cf686c4-d25f-4ff7-893f-80f579240547\" (UID: \"6cf686c4-d25f-4ff7-893f-80f579240547\") " Jan 28 15:39:58 crc kubenswrapper[4893]: I0128 15:39:58.846611 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cf686c4-d25f-4ff7-893f-80f579240547-catalog-content\") pod \"6cf686c4-d25f-4ff7-893f-80f579240547\" (UID: \"6cf686c4-d25f-4ff7-893f-80f579240547\") " Jan 28 15:39:58 crc kubenswrapper[4893]: I0128 15:39:58.846650 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jd9nf\" (UniqueName: \"kubernetes.io/projected/6cf686c4-d25f-4ff7-893f-80f579240547-kube-api-access-jd9nf\") pod \"6cf686c4-d25f-4ff7-893f-80f579240547\" (UID: \"6cf686c4-d25f-4ff7-893f-80f579240547\") " Jan 28 15:39:58 crc kubenswrapper[4893]: I0128 15:39:58.847524 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cf686c4-d25f-4ff7-893f-80f579240547-utilities" (OuterVolumeSpecName: "utilities") pod "6cf686c4-d25f-4ff7-893f-80f579240547" (UID: "6cf686c4-d25f-4ff7-893f-80f579240547"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:39:58 crc kubenswrapper[4893]: I0128 15:39:58.854120 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cf686c4-d25f-4ff7-893f-80f579240547-kube-api-access-jd9nf" (OuterVolumeSpecName: "kube-api-access-jd9nf") pod "6cf686c4-d25f-4ff7-893f-80f579240547" (UID: "6cf686c4-d25f-4ff7-893f-80f579240547"). InnerVolumeSpecName "kube-api-access-jd9nf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:39:58 crc kubenswrapper[4893]: I0128 15:39:58.903436 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cf686c4-d25f-4ff7-893f-80f579240547-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6cf686c4-d25f-4ff7-893f-80f579240547" (UID: "6cf686c4-d25f-4ff7-893f-80f579240547"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:39:58 crc kubenswrapper[4893]: I0128 15:39:58.948576 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cf686c4-d25f-4ff7-893f-80f579240547-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:39:58 crc kubenswrapper[4893]: I0128 15:39:58.948623 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jd9nf\" (UniqueName: \"kubernetes.io/projected/6cf686c4-d25f-4ff7-893f-80f579240547-kube-api-access-jd9nf\") on node \"crc\" DevicePath \"\"" Jan 28 15:39:58 crc kubenswrapper[4893]: I0128 15:39:58.948639 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cf686c4-d25f-4ff7-893f-80f579240547-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:39:59 crc kubenswrapper[4893]: I0128 15:39:59.363756 4893 generic.go:334] "Generic (PLEG): container finished" podID="6cf686c4-d25f-4ff7-893f-80f579240547" containerID="2da9724d0331fd7f9904fc972dec62903a03fbe9f0d9dababb6a24e7053f75d7" exitCode=0 Jan 28 15:39:59 crc kubenswrapper[4893]: I0128 15:39:59.363802 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-87l6c" event={"ID":"6cf686c4-d25f-4ff7-893f-80f579240547","Type":"ContainerDied","Data":"2da9724d0331fd7f9904fc972dec62903a03fbe9f0d9dababb6a24e7053f75d7"} Jan 28 15:39:59 crc kubenswrapper[4893]: I0128 15:39:59.363832 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-87l6c" event={"ID":"6cf686c4-d25f-4ff7-893f-80f579240547","Type":"ContainerDied","Data":"b95716746fbc73b44c72c326cae8b26720ac561dc7782f7e332bf05b48936fb6"} Jan 28 15:39:59 crc kubenswrapper[4893]: I0128 15:39:59.363843 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-87l6c" Jan 28 15:39:59 crc kubenswrapper[4893]: I0128 15:39:59.363849 4893 scope.go:117] "RemoveContainer" containerID="2da9724d0331fd7f9904fc972dec62903a03fbe9f0d9dababb6a24e7053f75d7" Jan 28 15:39:59 crc kubenswrapper[4893]: I0128 15:39:59.383828 4893 scope.go:117] "RemoveContainer" containerID="b423092b59241f4db85b20d4b0e70ce995a65de4dc0841e84d2fbd48c1fc343e" Jan 28 15:39:59 crc kubenswrapper[4893]: I0128 15:39:59.412703 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-87l6c"] Jan 28 15:39:59 crc kubenswrapper[4893]: I0128 15:39:59.421273 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-87l6c"] Jan 28 15:39:59 crc kubenswrapper[4893]: I0128 15:39:59.424209 4893 scope.go:117] "RemoveContainer" containerID="9fa3723c3e29b8d744feba9699ea3f8e03ed60d4b5db5a1e0bd0800ea997d341" Jan 28 15:39:59 crc kubenswrapper[4893]: I0128 15:39:59.442131 4893 scope.go:117] "RemoveContainer" containerID="2da9724d0331fd7f9904fc972dec62903a03fbe9f0d9dababb6a24e7053f75d7" Jan 28 15:39:59 crc kubenswrapper[4893]: E0128 15:39:59.442665 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2da9724d0331fd7f9904fc972dec62903a03fbe9f0d9dababb6a24e7053f75d7\": container with ID starting with 2da9724d0331fd7f9904fc972dec62903a03fbe9f0d9dababb6a24e7053f75d7 not found: ID does not exist" containerID="2da9724d0331fd7f9904fc972dec62903a03fbe9f0d9dababb6a24e7053f75d7" Jan 28 15:39:59 crc kubenswrapper[4893]: I0128 15:39:59.442702 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2da9724d0331fd7f9904fc972dec62903a03fbe9f0d9dababb6a24e7053f75d7"} err="failed to get container status \"2da9724d0331fd7f9904fc972dec62903a03fbe9f0d9dababb6a24e7053f75d7\": rpc error: code = NotFound desc = could not find container \"2da9724d0331fd7f9904fc972dec62903a03fbe9f0d9dababb6a24e7053f75d7\": container with ID starting with 2da9724d0331fd7f9904fc972dec62903a03fbe9f0d9dababb6a24e7053f75d7 not found: ID does not exist" Jan 28 15:39:59 crc kubenswrapper[4893]: I0128 15:39:59.442730 4893 scope.go:117] "RemoveContainer" containerID="b423092b59241f4db85b20d4b0e70ce995a65de4dc0841e84d2fbd48c1fc343e" Jan 28 15:39:59 crc kubenswrapper[4893]: E0128 15:39:59.443360 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b423092b59241f4db85b20d4b0e70ce995a65de4dc0841e84d2fbd48c1fc343e\": container with ID starting with b423092b59241f4db85b20d4b0e70ce995a65de4dc0841e84d2fbd48c1fc343e not found: ID does not exist" containerID="b423092b59241f4db85b20d4b0e70ce995a65de4dc0841e84d2fbd48c1fc343e" Jan 28 15:39:59 crc kubenswrapper[4893]: I0128 15:39:59.443495 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b423092b59241f4db85b20d4b0e70ce995a65de4dc0841e84d2fbd48c1fc343e"} err="failed to get container status \"b423092b59241f4db85b20d4b0e70ce995a65de4dc0841e84d2fbd48c1fc343e\": rpc error: code = NotFound desc = could not find container \"b423092b59241f4db85b20d4b0e70ce995a65de4dc0841e84d2fbd48c1fc343e\": container with ID starting with b423092b59241f4db85b20d4b0e70ce995a65de4dc0841e84d2fbd48c1fc343e not found: ID does not exist" Jan 28 15:39:59 crc kubenswrapper[4893]: I0128 15:39:59.443578 4893 scope.go:117] "RemoveContainer" 
containerID="9fa3723c3e29b8d744feba9699ea3f8e03ed60d4b5db5a1e0bd0800ea997d341" Jan 28 15:39:59 crc kubenswrapper[4893]: E0128 15:39:59.443987 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fa3723c3e29b8d744feba9699ea3f8e03ed60d4b5db5a1e0bd0800ea997d341\": container with ID starting with 9fa3723c3e29b8d744feba9699ea3f8e03ed60d4b5db5a1e0bd0800ea997d341 not found: ID does not exist" containerID="9fa3723c3e29b8d744feba9699ea3f8e03ed60d4b5db5a1e0bd0800ea997d341" Jan 28 15:39:59 crc kubenswrapper[4893]: I0128 15:39:59.444045 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fa3723c3e29b8d744feba9699ea3f8e03ed60d4b5db5a1e0bd0800ea997d341"} err="failed to get container status \"9fa3723c3e29b8d744feba9699ea3f8e03ed60d4b5db5a1e0bd0800ea997d341\": rpc error: code = NotFound desc = could not find container \"9fa3723c3e29b8d744feba9699ea3f8e03ed60d4b5db5a1e0bd0800ea997d341\": container with ID starting with 9fa3723c3e29b8d744feba9699ea3f8e03ed60d4b5db5a1e0bd0800ea997d341 not found: ID does not exist" Jan 28 15:40:00 crc kubenswrapper[4893]: I0128 15:40:00.901775 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cf686c4-d25f-4ff7-893f-80f579240547" path="/var/lib/kubelet/pods/6cf686c4-d25f-4ff7-893f-80f579240547/volumes" Jan 28 15:40:03 crc kubenswrapper[4893]: I0128 15:40:03.891268 4893 scope.go:117] "RemoveContainer" containerID="7d9081146070c30cd3beaf47d75cd91803a7a4d6cb88172e02e66af988619855" Jan 28 15:40:03 crc kubenswrapper[4893]: E0128 15:40:03.892572 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-vz6mp_nova-kuttl-default(c1771f4b-e3fc-4a93-8a60-c9c53f248e02)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" Jan 28 15:40:05 crc kubenswrapper[4893]: I0128 15:40:05.722154 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:40:05 crc kubenswrapper[4893]: I0128 15:40:05.722222 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:40:15 crc kubenswrapper[4893]: I0128 15:40:15.891360 4893 scope.go:117] "RemoveContainer" containerID="7d9081146070c30cd3beaf47d75cd91803a7a4d6cb88172e02e66af988619855" Jan 28 15:40:15 crc kubenswrapper[4893]: E0128 15:40:15.892109 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-vz6mp_nova-kuttl-default(c1771f4b-e3fc-4a93-8a60-c9c53f248e02)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" Jan 28 15:40:16 crc kubenswrapper[4893]: I0128 15:40:16.502008 4893 log.go:25] "Finished parsing log file" 
path="/var/log/pods/nova-kuttl-default_keystone-86bf966444-cll8k_a79fa730-be33-48f7-9ef0-7964e2afbede/keystone-api/0.log" Jan 28 15:40:19 crc kubenswrapper[4893]: I0128 15:40:19.658133 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_memcached-0_4e4c3f33-0d15-4434-9940-21a310e1e272/memcached/0.log" Jan 28 15:40:20 crc kubenswrapper[4893]: I0128 15:40:20.170010 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-api-a703-account-create-update-r88t7_25b927e3-d3f5-4343-af70-bc2eb39a539c/mariadb-account-create-update/0.log" Jan 28 15:40:20 crc kubenswrapper[4893]: I0128 15:40:20.705435 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-api-db-create-hc6tn_4bb0a658-a2dc-4442-a362-e1a6fd576848/mariadb-database-create/0.log" Jan 28 15:40:21 crc kubenswrapper[4893]: I0128 15:40:21.218549 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell0-9d70-account-create-update-97bqx_51a939d5-f485-40b5-bc7b-05d3e063db83/mariadb-account-create-update/0.log" Jan 28 15:40:21 crc kubenswrapper[4893]: I0128 15:40:21.782244 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell0-db-create-vxgfw_5bf5b624-d148-4c17-8824-77512ecaadba/mariadb-database-create/0.log" Jan 28 15:40:22 crc kubenswrapper[4893]: I0128 15:40:22.408811 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell1-2799-account-create-update-rspsx_4549a6c6-f4a4-463a-8b6e-2a0d7edeae42/mariadb-account-create-update/0.log" Jan 28 15:40:22 crc kubenswrapper[4893]: I0128 15:40:22.787346 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell1-db-create-fxmm7_d555b18f-0774-4e4c-9b9d-10ee1335d432/mariadb-database-create/0.log" Jan 28 15:40:23 crc kubenswrapper[4893]: I0128 15:40:23.266251 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-api-0_378ab36a-3a2c-4a6d-836f-92eba12307fe/nova-kuttl-api-log/0.log" Jan 28 15:40:23 crc kubenswrapper[4893]: I0128 15:40:23.661915 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-cell-mapping-2bxvl_79f84931-160b-409c-bb0b-193fd8988158/nova-manage/0.log" Jan 28 15:40:24 crc kubenswrapper[4893]: I0128 15:40:24.079190 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-conductor-0_22539c1b-d8a1-4f7d-b202-b33f849a21b4/nova-kuttl-cell0-conductor-conductor/0.log" Jan 28 15:40:24 crc kubenswrapper[4893]: I0128 15:40:24.495161 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-conductor-db-sync-jr59w_e84c5ebf-c963-4acb-b64f-107efda9798d/nova-kuttl-cell0-conductor-db-sync/0.log" Jan 28 15:40:24 crc kubenswrapper[4893]: I0128 15:40:24.868983 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-cell-delete-vz6mp_c1771f4b-e3fc-4a93-8a60-c9c53f248e02/nova-manage/5.log" Jan 28 15:40:25 crc kubenswrapper[4893]: I0128 15:40:25.247446 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-cell-mapping-nbnnn_2b554e78-6b57-406d-8a05-0e2931db92b7/nova-manage/0.log" Jan 28 15:40:25 crc kubenswrapper[4893]: I0128 15:40:25.709884 4893 log.go:25] "Finished parsing log file" 
path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-conductor-0_f05773d2-58b3-4e11-9962-45502872c375/nova-kuttl-cell1-conductor-conductor/0.log" Jan 28 15:40:26 crc kubenswrapper[4893]: I0128 15:40:26.104144 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-conductor-db-sync-9kfvk_5788fc83-55a9-489b-b094-e6a36fe58124/nova-kuttl-cell1-conductor-db-sync/0.log" Jan 28 15:40:26 crc kubenswrapper[4893]: I0128 15:40:26.494701 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-novncproxy-0_90e30875-ed7b-4c7e-b8ed-3deb340cfd2b/nova-kuttl-cell1-novncproxy-novncproxy/0.log" Jan 28 15:40:26 crc kubenswrapper[4893]: I0128 15:40:26.948143 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-metadata-0_28ce2e6b-b04e-4d88-a01b-101d056e8137/nova-kuttl-metadata-log/0.log" Jan 28 15:40:27 crc kubenswrapper[4893]: I0128 15:40:27.401779 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-scheduler-0_00c7d078-56fd-4f9a-a20a-5dc498625eb1/nova-kuttl-scheduler-scheduler/0.log" Jan 28 15:40:27 crc kubenswrapper[4893]: I0128 15:40:27.819920 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_1418afdb-10ec-4cb7-853d-d0f755621625/galera/0.log" Jan 28 15:40:27 crc kubenswrapper[4893]: I0128 15:40:27.892110 4893 scope.go:117] "RemoveContainer" containerID="7d9081146070c30cd3beaf47d75cd91803a7a4d6cb88172e02e66af988619855" Jan 28 15:40:27 crc kubenswrapper[4893]: E0128 15:40:27.892442 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-vz6mp_nova-kuttl-default(c1771f4b-e3fc-4a93-8a60-c9c53f248e02)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" Jan 28 15:40:28 crc kubenswrapper[4893]: I0128 15:40:28.244746 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_6cf03c71-da90-490d-8f3c-f5646a45b9d6/galera/0.log" Jan 28 15:40:28 crc kubenswrapper[4893]: I0128 15:40:28.627038 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstackclient_5a7bef9d-825c-491a-887c-651ea4b6ca59/openstackclient/0.log" Jan 28 15:40:29 crc kubenswrapper[4893]: I0128 15:40:29.045274 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-7dbd979f64-625pv_092a35ea-0d0f-4538-a702-fcf0a09e3683/placement-log/0.log" Jan 28 15:40:29 crc kubenswrapper[4893]: I0128 15:40:29.496967 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_dcd1c126-70b7-46e1-8226-bc7dc353ecdb/rabbitmq/0.log" Jan 28 15:40:29 crc kubenswrapper[4893]: I0128 15:40:29.946711 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_67b2b466-ebc4-41d8-8b96-a285eb0609f5/rabbitmq/0.log" Jan 28 15:40:30 crc kubenswrapper[4893]: I0128 15:40:30.371621 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_01a81616-675d-43ec-acb2-7a4541b96771/rabbitmq/0.log" Jan 28 15:40:33 crc kubenswrapper[4893]: I0128 15:40:33.462810 4893 scope.go:117] "RemoveContainer" containerID="19a3c0768eaf9982ac92395a5658756fe6092b2a770ba144c8d5a60ecbfa1dc8" Jan 28 15:40:33 crc 
kubenswrapper[4893]: I0128 15:40:33.483405 4893 scope.go:117] "RemoveContainer" containerID="415718ed6f6e6a41fa5c59c692e62f51cdce66287c25516a340a7a2505f0225d" Jan 28 15:40:33 crc kubenswrapper[4893]: I0128 15:40:33.502572 4893 scope.go:117] "RemoveContainer" containerID="5a287f5a2552088d8db9daae8af3d05ed2055d0302e5a00b0a88cb34d8341fec" Jan 28 15:40:33 crc kubenswrapper[4893]: I0128 15:40:33.540396 4893 scope.go:117] "RemoveContainer" containerID="9d0b87a181bb5d077f8e57d5ea94aeff4b083d323dbd0fcaa51ee125283292b1" Jan 28 15:40:35 crc kubenswrapper[4893]: I0128 15:40:35.722849 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:40:35 crc kubenswrapper[4893]: I0128 15:40:35.723394 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:40:38 crc kubenswrapper[4893]: I0128 15:40:38.892088 4893 scope.go:117] "RemoveContainer" containerID="7d9081146070c30cd3beaf47d75cd91803a7a4d6cb88172e02e66af988619855" Jan 28 15:40:38 crc kubenswrapper[4893]: E0128 15:40:38.893222 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-vz6mp_nova-kuttl-default(c1771f4b-e3fc-4a93-8a60-c9c53f248e02)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" Jan 28 15:40:40 crc kubenswrapper[4893]: I0128 15:40:40.748137 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-79v8l"] Jan 28 15:40:40 crc kubenswrapper[4893]: E0128 15:40:40.749483 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cf686c4-d25f-4ff7-893f-80f579240547" containerName="extract-utilities" Jan 28 15:40:40 crc kubenswrapper[4893]: I0128 15:40:40.749506 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cf686c4-d25f-4ff7-893f-80f579240547" containerName="extract-utilities" Jan 28 15:40:40 crc kubenswrapper[4893]: E0128 15:40:40.749532 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cf686c4-d25f-4ff7-893f-80f579240547" containerName="extract-content" Jan 28 15:40:40 crc kubenswrapper[4893]: I0128 15:40:40.749543 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cf686c4-d25f-4ff7-893f-80f579240547" containerName="extract-content" Jan 28 15:40:40 crc kubenswrapper[4893]: E0128 15:40:40.749557 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cf686c4-d25f-4ff7-893f-80f579240547" containerName="registry-server" Jan 28 15:40:40 crc kubenswrapper[4893]: I0128 15:40:40.749566 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cf686c4-d25f-4ff7-893f-80f579240547" containerName="registry-server" Jan 28 15:40:40 crc kubenswrapper[4893]: I0128 15:40:40.749771 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cf686c4-d25f-4ff7-893f-80f579240547" containerName="registry-server" Jan 28 15:40:40 crc kubenswrapper[4893]: I0128 15:40:40.765906 4893 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-79v8l" Jan 28 15:40:40 crc kubenswrapper[4893]: I0128 15:40:40.786012 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-79v8l"] Jan 28 15:40:40 crc kubenswrapper[4893]: I0128 15:40:40.897399 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0225d599-95cb-4ef4-aff0-9eea05552449-utilities\") pod \"community-operators-79v8l\" (UID: \"0225d599-95cb-4ef4-aff0-9eea05552449\") " pod="openshift-marketplace/community-operators-79v8l" Jan 28 15:40:40 crc kubenswrapper[4893]: I0128 15:40:40.897711 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0225d599-95cb-4ef4-aff0-9eea05552449-catalog-content\") pod \"community-operators-79v8l\" (UID: \"0225d599-95cb-4ef4-aff0-9eea05552449\") " pod="openshift-marketplace/community-operators-79v8l" Jan 28 15:40:40 crc kubenswrapper[4893]: I0128 15:40:40.897861 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgm7x\" (UniqueName: \"kubernetes.io/projected/0225d599-95cb-4ef4-aff0-9eea05552449-kube-api-access-lgm7x\") pod \"community-operators-79v8l\" (UID: \"0225d599-95cb-4ef4-aff0-9eea05552449\") " pod="openshift-marketplace/community-operators-79v8l" Jan 28 15:40:41 crc kubenswrapper[4893]: I0128 15:40:41.000030 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0225d599-95cb-4ef4-aff0-9eea05552449-utilities\") pod \"community-operators-79v8l\" (UID: \"0225d599-95cb-4ef4-aff0-9eea05552449\") " pod="openshift-marketplace/community-operators-79v8l" Jan 28 15:40:41 crc kubenswrapper[4893]: I0128 15:40:41.000091 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0225d599-95cb-4ef4-aff0-9eea05552449-catalog-content\") pod \"community-operators-79v8l\" (UID: \"0225d599-95cb-4ef4-aff0-9eea05552449\") " pod="openshift-marketplace/community-operators-79v8l" Jan 28 15:40:41 crc kubenswrapper[4893]: I0128 15:40:41.000108 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgm7x\" (UniqueName: \"kubernetes.io/projected/0225d599-95cb-4ef4-aff0-9eea05552449-kube-api-access-lgm7x\") pod \"community-operators-79v8l\" (UID: \"0225d599-95cb-4ef4-aff0-9eea05552449\") " pod="openshift-marketplace/community-operators-79v8l" Jan 28 15:40:41 crc kubenswrapper[4893]: I0128 15:40:41.000892 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0225d599-95cb-4ef4-aff0-9eea05552449-utilities\") pod \"community-operators-79v8l\" (UID: \"0225d599-95cb-4ef4-aff0-9eea05552449\") " pod="openshift-marketplace/community-operators-79v8l" Jan 28 15:40:41 crc kubenswrapper[4893]: I0128 15:40:41.000931 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0225d599-95cb-4ef4-aff0-9eea05552449-catalog-content\") pod \"community-operators-79v8l\" (UID: \"0225d599-95cb-4ef4-aff0-9eea05552449\") " pod="openshift-marketplace/community-operators-79v8l" Jan 28 15:40:41 crc kubenswrapper[4893]: I0128 15:40:41.038532 4893 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgm7x\" (UniqueName: \"kubernetes.io/projected/0225d599-95cb-4ef4-aff0-9eea05552449-kube-api-access-lgm7x\") pod \"community-operators-79v8l\" (UID: \"0225d599-95cb-4ef4-aff0-9eea05552449\") " pod="openshift-marketplace/community-operators-79v8l" Jan 28 15:40:41 crc kubenswrapper[4893]: I0128 15:40:41.113748 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-79v8l" Jan 28 15:40:41 crc kubenswrapper[4893]: I0128 15:40:41.653199 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-79v8l"] Jan 28 15:40:41 crc kubenswrapper[4893]: W0128 15:40:41.658652 4893 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0225d599_95cb_4ef4_aff0_9eea05552449.slice/crio-b56c0896c949787865c4bf03160b4f1c46231a3e98c7ff5a95470f62ed294961 WatchSource:0}: Error finding container b56c0896c949787865c4bf03160b4f1c46231a3e98c7ff5a95470f62ed294961: Status 404 returned error can't find the container with id b56c0896c949787865c4bf03160b4f1c46231a3e98c7ff5a95470f62ed294961 Jan 28 15:40:41 crc kubenswrapper[4893]: I0128 15:40:41.707778 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79v8l" event={"ID":"0225d599-95cb-4ef4-aff0-9eea05552449","Type":"ContainerStarted","Data":"b56c0896c949787865c4bf03160b4f1c46231a3e98c7ff5a95470f62ed294961"} Jan 28 15:40:42 crc kubenswrapper[4893]: I0128 15:40:42.716189 4893 generic.go:334] "Generic (PLEG): container finished" podID="0225d599-95cb-4ef4-aff0-9eea05552449" containerID="98a24d6216f72fab5bd16b699931bd01c6307042780e5d8c3f78e34eac84d699" exitCode=0 Jan 28 15:40:42 crc kubenswrapper[4893]: I0128 15:40:42.716441 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79v8l" event={"ID":"0225d599-95cb-4ef4-aff0-9eea05552449","Type":"ContainerDied","Data":"98a24d6216f72fab5bd16b699931bd01c6307042780e5d8c3f78e34eac84d699"} Jan 28 15:40:43 crc kubenswrapper[4893]: I0128 15:40:43.732212 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79v8l" event={"ID":"0225d599-95cb-4ef4-aff0-9eea05552449","Type":"ContainerStarted","Data":"ff8edfaec83416296af9e20502a8b8f4b6540bfcbf3f5d7cebc8979b3727aede"} Jan 28 15:40:44 crc kubenswrapper[4893]: I0128 15:40:44.746167 4893 generic.go:334] "Generic (PLEG): container finished" podID="0225d599-95cb-4ef4-aff0-9eea05552449" containerID="ff8edfaec83416296af9e20502a8b8f4b6540bfcbf3f5d7cebc8979b3727aede" exitCode=0 Jan 28 15:40:44 crc kubenswrapper[4893]: I0128 15:40:44.746266 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79v8l" event={"ID":"0225d599-95cb-4ef4-aff0-9eea05552449","Type":"ContainerDied","Data":"ff8edfaec83416296af9e20502a8b8f4b6540bfcbf3f5d7cebc8979b3727aede"} Jan 28 15:40:44 crc kubenswrapper[4893]: I0128 15:40:44.942511 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5zv7l"] Jan 28 15:40:44 crc kubenswrapper[4893]: I0128 15:40:44.944631 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5zv7l" Jan 28 15:40:44 crc kubenswrapper[4893]: I0128 15:40:44.959000 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5zv7l"] Jan 28 15:40:45 crc kubenswrapper[4893]: I0128 15:40:45.080113 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrgrq\" (UniqueName: \"kubernetes.io/projected/4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed-kube-api-access-vrgrq\") pod \"redhat-marketplace-5zv7l\" (UID: \"4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed\") " pod="openshift-marketplace/redhat-marketplace-5zv7l" Jan 28 15:40:45 crc kubenswrapper[4893]: I0128 15:40:45.080187 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed-utilities\") pod \"redhat-marketplace-5zv7l\" (UID: \"4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed\") " pod="openshift-marketplace/redhat-marketplace-5zv7l" Jan 28 15:40:45 crc kubenswrapper[4893]: I0128 15:40:45.080238 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed-catalog-content\") pod \"redhat-marketplace-5zv7l\" (UID: \"4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed\") " pod="openshift-marketplace/redhat-marketplace-5zv7l" Jan 28 15:40:45 crc kubenswrapper[4893]: I0128 15:40:45.181710 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrgrq\" (UniqueName: \"kubernetes.io/projected/4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed-kube-api-access-vrgrq\") pod \"redhat-marketplace-5zv7l\" (UID: \"4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed\") " pod="openshift-marketplace/redhat-marketplace-5zv7l" Jan 28 15:40:45 crc kubenswrapper[4893]: I0128 15:40:45.181797 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed-utilities\") pod \"redhat-marketplace-5zv7l\" (UID: \"4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed\") " pod="openshift-marketplace/redhat-marketplace-5zv7l" Jan 28 15:40:45 crc kubenswrapper[4893]: I0128 15:40:45.181855 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed-catalog-content\") pod \"redhat-marketplace-5zv7l\" (UID: \"4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed\") " pod="openshift-marketplace/redhat-marketplace-5zv7l" Jan 28 15:40:45 crc kubenswrapper[4893]: I0128 15:40:45.182978 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed-utilities\") pod \"redhat-marketplace-5zv7l\" (UID: \"4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed\") " pod="openshift-marketplace/redhat-marketplace-5zv7l" Jan 28 15:40:45 crc kubenswrapper[4893]: I0128 15:40:45.183034 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed-catalog-content\") pod \"redhat-marketplace-5zv7l\" (UID: \"4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed\") " pod="openshift-marketplace/redhat-marketplace-5zv7l" Jan 28 15:40:45 crc kubenswrapper[4893]: I0128 15:40:45.219172 4893 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vrgrq\" (UniqueName: \"kubernetes.io/projected/4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed-kube-api-access-vrgrq\") pod \"redhat-marketplace-5zv7l\" (UID: \"4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed\") " pod="openshift-marketplace/redhat-marketplace-5zv7l" Jan 28 15:40:45 crc kubenswrapper[4893]: I0128 15:40:45.305107 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5zv7l" Jan 28 15:40:45 crc kubenswrapper[4893]: I0128 15:40:45.624189 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5zv7l"] Jan 28 15:40:45 crc kubenswrapper[4893]: I0128 15:40:45.756350 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5zv7l" event={"ID":"4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed","Type":"ContainerStarted","Data":"cc86cb1cead061c042a2be3d1104a6d3da678bbf9835d782bcb4724b8ac7976a"} Jan 28 15:40:45 crc kubenswrapper[4893]: I0128 15:40:45.760307 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79v8l" event={"ID":"0225d599-95cb-4ef4-aff0-9eea05552449","Type":"ContainerStarted","Data":"a04d8117a99f4ed9ea32f221f53b206b505501c84919f392141caae60e4a89e9"} Jan 28 15:40:45 crc kubenswrapper[4893]: I0128 15:40:45.784783 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-79v8l" podStartSLOduration=3.232691038 podStartE2EDuration="5.78475333s" podCreationTimestamp="2026-01-28 15:40:40 +0000 UTC" firstStartedPulling="2026-01-28 15:40:42.717806428 +0000 UTC m=+2360.491421456" lastFinishedPulling="2026-01-28 15:40:45.26986872 +0000 UTC m=+2363.043483748" observedRunningTime="2026-01-28 15:40:45.784494143 +0000 UTC m=+2363.558109171" watchObservedRunningTime="2026-01-28 15:40:45.78475333 +0000 UTC m=+2363.558368358" Jan 28 15:40:46 crc kubenswrapper[4893]: I0128 15:40:46.769781 4893 generic.go:334] "Generic (PLEG): container finished" podID="4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed" containerID="5a724e89c43f7b3f4e70bea41c993e63310bf22c29a7cc01151d88530345dcf0" exitCode=0 Jan 28 15:40:46 crc kubenswrapper[4893]: I0128 15:40:46.769826 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5zv7l" event={"ID":"4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed","Type":"ContainerDied","Data":"5a724e89c43f7b3f4e70bea41c993e63310bf22c29a7cc01151d88530345dcf0"} Jan 28 15:40:47 crc kubenswrapper[4893]: I0128 15:40:47.780527 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5zv7l" event={"ID":"4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed","Type":"ContainerStarted","Data":"57e7cc6df17721a77c3a8ec4d8923dff62ae46715ac033f60b5e714713cdd953"} Jan 28 15:40:48 crc kubenswrapper[4893]: I0128 15:40:48.792654 4893 generic.go:334] "Generic (PLEG): container finished" podID="4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed" containerID="57e7cc6df17721a77c3a8ec4d8923dff62ae46715ac033f60b5e714713cdd953" exitCode=0 Jan 28 15:40:48 crc kubenswrapper[4893]: I0128 15:40:48.792721 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5zv7l" event={"ID":"4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed","Type":"ContainerDied","Data":"57e7cc6df17721a77c3a8ec4d8923dff62ae46715ac033f60b5e714713cdd953"} Jan 28 15:40:49 crc kubenswrapper[4893]: I0128 15:40:49.804027 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-5zv7l" event={"ID":"4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed","Type":"ContainerStarted","Data":"6017404cf515ca32650ac0e29ef0c5eb9585421ca7666991f18f4780e0e7c497"} Jan 28 15:40:49 crc kubenswrapper[4893]: I0128 15:40:49.835110 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5zv7l" podStartSLOduration=3.10511137 podStartE2EDuration="5.835088257s" podCreationTimestamp="2026-01-28 15:40:44 +0000 UTC" firstStartedPulling="2026-01-28 15:40:46.771880847 +0000 UTC m=+2364.545495875" lastFinishedPulling="2026-01-28 15:40:49.501857734 +0000 UTC m=+2367.275472762" observedRunningTime="2026-01-28 15:40:49.82309142 +0000 UTC m=+2367.596706478" watchObservedRunningTime="2026-01-28 15:40:49.835088257 +0000 UTC m=+2367.608703285" Jan 28 15:40:49 crc kubenswrapper[4893]: I0128 15:40:49.891890 4893 scope.go:117] "RemoveContainer" containerID="7d9081146070c30cd3beaf47d75cd91803a7a4d6cb88172e02e66af988619855" Jan 28 15:40:49 crc kubenswrapper[4893]: E0128 15:40:49.892252 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-vz6mp_nova-kuttl-default(c1771f4b-e3fc-4a93-8a60-c9c53f248e02)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" Jan 28 15:40:51 crc kubenswrapper[4893]: I0128 15:40:51.114544 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-79v8l" Jan 28 15:40:51 crc kubenswrapper[4893]: I0128 15:40:51.114870 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-79v8l" Jan 28 15:40:51 crc kubenswrapper[4893]: I0128 15:40:51.164449 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-79v8l" Jan 28 15:40:51 crc kubenswrapper[4893]: I0128 15:40:51.863400 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-79v8l" Jan 28 15:40:55 crc kubenswrapper[4893]: I0128 15:40:55.306452 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5zv7l" Jan 28 15:40:55 crc kubenswrapper[4893]: I0128 15:40:55.307118 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5zv7l" Jan 28 15:40:55 crc kubenswrapper[4893]: I0128 15:40:55.388429 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5zv7l" Jan 28 15:40:55 crc kubenswrapper[4893]: I0128 15:40:55.546549 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-79v8l"] Jan 28 15:40:55 crc kubenswrapper[4893]: I0128 15:40:55.546830 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-79v8l" podUID="0225d599-95cb-4ef4-aff0-9eea05552449" containerName="registry-server" containerID="cri-o://a04d8117a99f4ed9ea32f221f53b206b505501c84919f392141caae60e4a89e9" gracePeriod=2 Jan 28 15:40:55 crc kubenswrapper[4893]: I0128 15:40:55.857303 4893 generic.go:334] "Generic (PLEG): container finished" podID="0225d599-95cb-4ef4-aff0-9eea05552449" 
containerID="a04d8117a99f4ed9ea32f221f53b206b505501c84919f392141caae60e4a89e9" exitCode=0 Jan 28 15:40:55 crc kubenswrapper[4893]: I0128 15:40:55.857528 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79v8l" event={"ID":"0225d599-95cb-4ef4-aff0-9eea05552449","Type":"ContainerDied","Data":"a04d8117a99f4ed9ea32f221f53b206b505501c84919f392141caae60e4a89e9"} Jan 28 15:40:55 crc kubenswrapper[4893]: I0128 15:40:55.905384 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5zv7l" Jan 28 15:40:55 crc kubenswrapper[4893]: I0128 15:40:55.981793 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-79v8l" Jan 28 15:40:56 crc kubenswrapper[4893]: I0128 15:40:56.111764 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgm7x\" (UniqueName: \"kubernetes.io/projected/0225d599-95cb-4ef4-aff0-9eea05552449-kube-api-access-lgm7x\") pod \"0225d599-95cb-4ef4-aff0-9eea05552449\" (UID: \"0225d599-95cb-4ef4-aff0-9eea05552449\") " Jan 28 15:40:56 crc kubenswrapper[4893]: I0128 15:40:56.111943 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0225d599-95cb-4ef4-aff0-9eea05552449-utilities\") pod \"0225d599-95cb-4ef4-aff0-9eea05552449\" (UID: \"0225d599-95cb-4ef4-aff0-9eea05552449\") " Jan 28 15:40:56 crc kubenswrapper[4893]: I0128 15:40:56.112273 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0225d599-95cb-4ef4-aff0-9eea05552449-catalog-content\") pod \"0225d599-95cb-4ef4-aff0-9eea05552449\" (UID: \"0225d599-95cb-4ef4-aff0-9eea05552449\") " Jan 28 15:40:56 crc kubenswrapper[4893]: I0128 15:40:56.113263 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0225d599-95cb-4ef4-aff0-9eea05552449-utilities" (OuterVolumeSpecName: "utilities") pod "0225d599-95cb-4ef4-aff0-9eea05552449" (UID: "0225d599-95cb-4ef4-aff0-9eea05552449"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:40:56 crc kubenswrapper[4893]: I0128 15:40:56.117708 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0225d599-95cb-4ef4-aff0-9eea05552449-kube-api-access-lgm7x" (OuterVolumeSpecName: "kube-api-access-lgm7x") pod "0225d599-95cb-4ef4-aff0-9eea05552449" (UID: "0225d599-95cb-4ef4-aff0-9eea05552449"). InnerVolumeSpecName "kube-api-access-lgm7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:40:56 crc kubenswrapper[4893]: I0128 15:40:56.164868 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0225d599-95cb-4ef4-aff0-9eea05552449-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0225d599-95cb-4ef4-aff0-9eea05552449" (UID: "0225d599-95cb-4ef4-aff0-9eea05552449"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:40:56 crc kubenswrapper[4893]: I0128 15:40:56.214715 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0225d599-95cb-4ef4-aff0-9eea05552449-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:40:56 crc kubenswrapper[4893]: I0128 15:40:56.214750 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0225d599-95cb-4ef4-aff0-9eea05552449-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:40:56 crc kubenswrapper[4893]: I0128 15:40:56.214761 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgm7x\" (UniqueName: \"kubernetes.io/projected/0225d599-95cb-4ef4-aff0-9eea05552449-kube-api-access-lgm7x\") on node \"crc\" DevicePath \"\"" Jan 28 15:40:56 crc kubenswrapper[4893]: I0128 15:40:56.874750 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79v8l" event={"ID":"0225d599-95cb-4ef4-aff0-9eea05552449","Type":"ContainerDied","Data":"b56c0896c949787865c4bf03160b4f1c46231a3e98c7ff5a95470f62ed294961"} Jan 28 15:40:56 crc kubenswrapper[4893]: I0128 15:40:56.874779 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-79v8l" Jan 28 15:40:56 crc kubenswrapper[4893]: I0128 15:40:56.875125 4893 scope.go:117] "RemoveContainer" containerID="a04d8117a99f4ed9ea32f221f53b206b505501c84919f392141caae60e4a89e9" Jan 28 15:40:56 crc kubenswrapper[4893]: I0128 15:40:56.902991 4893 scope.go:117] "RemoveContainer" containerID="ff8edfaec83416296af9e20502a8b8f4b6540bfcbf3f5d7cebc8979b3727aede" Jan 28 15:40:56 crc kubenswrapper[4893]: I0128 15:40:56.921776 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-79v8l"] Jan 28 15:40:56 crc kubenswrapper[4893]: I0128 15:40:56.933745 4893 scope.go:117] "RemoveContainer" containerID="98a24d6216f72fab5bd16b699931bd01c6307042780e5d8c3f78e34eac84d699" Jan 28 15:40:56 crc kubenswrapper[4893]: I0128 15:40:56.934188 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-79v8l"] Jan 28 15:40:58 crc kubenswrapper[4893]: I0128 15:40:58.905582 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0225d599-95cb-4ef4-aff0-9eea05552449" path="/var/lib/kubelet/pods/0225d599-95cb-4ef4-aff0-9eea05552449/volumes" Jan 28 15:40:59 crc kubenswrapper[4893]: I0128 15:40:59.794144 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg_8cb55e6c-bd6a-496e-a2bd-85b72cfb8146/extract/0.log" Jan 28 15:41:00 crc kubenswrapper[4893]: I0128 15:41:00.142642 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5zv7l"] Jan 28 15:41:00 crc kubenswrapper[4893]: I0128 15:41:00.142873 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5zv7l" podUID="4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed" containerName="registry-server" containerID="cri-o://6017404cf515ca32650ac0e29ef0c5eb9585421ca7666991f18f4780e0e7c497" gracePeriod=2 Jan 28 15:41:00 crc kubenswrapper[4893]: I0128 15:41:00.169725 4893 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l_e0602f55-847f-4987-ba4c-9aa5fb47ad7d/extract/0.log" Jan 28 15:41:00 crc kubenswrapper[4893]: I0128 15:41:00.547045 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5zv7l" Jan 28 15:41:00 crc kubenswrapper[4893]: I0128 15:41:00.565283 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-p6nxj_c2188ba2-ad62-4873-abfe-fa7ad88b57a6/manager/0.log" Jan 28 15:41:00 crc kubenswrapper[4893]: I0128 15:41:00.689235 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrgrq\" (UniqueName: \"kubernetes.io/projected/4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed-kube-api-access-vrgrq\") pod \"4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed\" (UID: \"4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed\") " Jan 28 15:41:00 crc kubenswrapper[4893]: I0128 15:41:00.689342 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed-catalog-content\") pod \"4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed\" (UID: \"4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed\") " Jan 28 15:41:00 crc kubenswrapper[4893]: I0128 15:41:00.689781 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed-utilities\") pod \"4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed\" (UID: \"4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed\") " Jan 28 15:41:00 crc kubenswrapper[4893]: I0128 15:41:00.690697 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed-utilities" (OuterVolumeSpecName: "utilities") pod "4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed" (UID: "4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:41:00 crc kubenswrapper[4893]: I0128 15:41:00.695054 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed-kube-api-access-vrgrq" (OuterVolumeSpecName: "kube-api-access-vrgrq") pod "4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed" (UID: "4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed"). InnerVolumeSpecName "kube-api-access-vrgrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:41:00 crc kubenswrapper[4893]: I0128 15:41:00.712600 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed" (UID: "4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:41:00 crc kubenswrapper[4893]: I0128 15:41:00.792160 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:41:00 crc kubenswrapper[4893]: I0128 15:41:00.792193 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrgrq\" (UniqueName: \"kubernetes.io/projected/4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed-kube-api-access-vrgrq\") on node \"crc\" DevicePath \"\"" Jan 28 15:41:00 crc kubenswrapper[4893]: I0128 15:41:00.792204 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:41:00 crc kubenswrapper[4893]: I0128 15:41:00.920809 4893 generic.go:334] "Generic (PLEG): container finished" podID="4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed" containerID="6017404cf515ca32650ac0e29ef0c5eb9585421ca7666991f18f4780e0e7c497" exitCode=0 Jan 28 15:41:00 crc kubenswrapper[4893]: I0128 15:41:00.920862 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5zv7l" event={"ID":"4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed","Type":"ContainerDied","Data":"6017404cf515ca32650ac0e29ef0c5eb9585421ca7666991f18f4780e0e7c497"} Jan 28 15:41:00 crc kubenswrapper[4893]: I0128 15:41:00.920995 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5zv7l" event={"ID":"4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed","Type":"ContainerDied","Data":"cc86cb1cead061c042a2be3d1104a6d3da678bbf9835d782bcb4724b8ac7976a"} Jan 28 15:41:00 crc kubenswrapper[4893]: I0128 15:41:00.921025 4893 scope.go:117] "RemoveContainer" containerID="6017404cf515ca32650ac0e29ef0c5eb9585421ca7666991f18f4780e0e7c497" Jan 28 15:41:00 crc kubenswrapper[4893]: I0128 15:41:00.921528 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5zv7l" Jan 28 15:41:00 crc kubenswrapper[4893]: I0128 15:41:00.952701 4893 scope.go:117] "RemoveContainer" containerID="57e7cc6df17721a77c3a8ec4d8923dff62ae46715ac033f60b5e714713cdd953" Jan 28 15:41:00 crc kubenswrapper[4893]: I0128 15:41:00.963720 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5zv7l"] Jan 28 15:41:00 crc kubenswrapper[4893]: I0128 15:41:00.970807 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-vdcjn_72d2e324-70de-4019-9673-0a86620ca028/manager/0.log" Jan 28 15:41:00 crc kubenswrapper[4893]: I0128 15:41:00.991305 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5zv7l"] Jan 28 15:41:00 crc kubenswrapper[4893]: I0128 15:41:00.992051 4893 scope.go:117] "RemoveContainer" containerID="5a724e89c43f7b3f4e70bea41c993e63310bf22c29a7cc01151d88530345dcf0" Jan 28 15:41:01 crc kubenswrapper[4893]: I0128 15:41:01.028731 4893 scope.go:117] "RemoveContainer" containerID="6017404cf515ca32650ac0e29ef0c5eb9585421ca7666991f18f4780e0e7c497" Jan 28 15:41:01 crc kubenswrapper[4893]: E0128 15:41:01.029183 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6017404cf515ca32650ac0e29ef0c5eb9585421ca7666991f18f4780e0e7c497\": container with ID starting with 6017404cf515ca32650ac0e29ef0c5eb9585421ca7666991f18f4780e0e7c497 not found: ID does not exist" containerID="6017404cf515ca32650ac0e29ef0c5eb9585421ca7666991f18f4780e0e7c497" Jan 28 15:41:01 crc kubenswrapper[4893]: I0128 15:41:01.029224 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6017404cf515ca32650ac0e29ef0c5eb9585421ca7666991f18f4780e0e7c497"} err="failed to get container status \"6017404cf515ca32650ac0e29ef0c5eb9585421ca7666991f18f4780e0e7c497\": rpc error: code = NotFound desc = could not find container \"6017404cf515ca32650ac0e29ef0c5eb9585421ca7666991f18f4780e0e7c497\": container with ID starting with 6017404cf515ca32650ac0e29ef0c5eb9585421ca7666991f18f4780e0e7c497 not found: ID does not exist" Jan 28 15:41:01 crc kubenswrapper[4893]: I0128 15:41:01.029249 4893 scope.go:117] "RemoveContainer" containerID="57e7cc6df17721a77c3a8ec4d8923dff62ae46715ac033f60b5e714713cdd953" Jan 28 15:41:01 crc kubenswrapper[4893]: E0128 15:41:01.029712 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57e7cc6df17721a77c3a8ec4d8923dff62ae46715ac033f60b5e714713cdd953\": container with ID starting with 57e7cc6df17721a77c3a8ec4d8923dff62ae46715ac033f60b5e714713cdd953 not found: ID does not exist" containerID="57e7cc6df17721a77c3a8ec4d8923dff62ae46715ac033f60b5e714713cdd953" Jan 28 15:41:01 crc kubenswrapper[4893]: I0128 15:41:01.029741 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57e7cc6df17721a77c3a8ec4d8923dff62ae46715ac033f60b5e714713cdd953"} err="failed to get container status \"57e7cc6df17721a77c3a8ec4d8923dff62ae46715ac033f60b5e714713cdd953\": rpc error: code = NotFound desc = could not find container \"57e7cc6df17721a77c3a8ec4d8923dff62ae46715ac033f60b5e714713cdd953\": container with ID starting with 57e7cc6df17721a77c3a8ec4d8923dff62ae46715ac033f60b5e714713cdd953 not found: ID does not exist" Jan 28 15:41:01 crc kubenswrapper[4893]: I0128 
15:41:01.029760 4893 scope.go:117] "RemoveContainer" containerID="5a724e89c43f7b3f4e70bea41c993e63310bf22c29a7cc01151d88530345dcf0" Jan 28 15:41:01 crc kubenswrapper[4893]: E0128 15:41:01.030264 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a724e89c43f7b3f4e70bea41c993e63310bf22c29a7cc01151d88530345dcf0\": container with ID starting with 5a724e89c43f7b3f4e70bea41c993e63310bf22c29a7cc01151d88530345dcf0 not found: ID does not exist" containerID="5a724e89c43f7b3f4e70bea41c993e63310bf22c29a7cc01151d88530345dcf0" Jan 28 15:41:01 crc kubenswrapper[4893]: I0128 15:41:01.030318 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a724e89c43f7b3f4e70bea41c993e63310bf22c29a7cc01151d88530345dcf0"} err="failed to get container status \"5a724e89c43f7b3f4e70bea41c993e63310bf22c29a7cc01151d88530345dcf0\": rpc error: code = NotFound desc = could not find container \"5a724e89c43f7b3f4e70bea41c993e63310bf22c29a7cc01151d88530345dcf0\": container with ID starting with 5a724e89c43f7b3f4e70bea41c993e63310bf22c29a7cc01151d88530345dcf0 not found: ID does not exist" Jan 28 15:41:01 crc kubenswrapper[4893]: I0128 15:41:01.359527 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-jnhg7_17019a37-b628-4464-b037-470c2be80308/manager/0.log" Jan 28 15:41:01 crc kubenswrapper[4893]: I0128 15:41:01.745817 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-dlrsm_4179ac2f-dd41-4cd3-8558-6daba8252582/manager/0.log" Jan 28 15:41:02 crc kubenswrapper[4893]: I0128 15:41:02.093928 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-j8x44_0e525c35-621a-43f8-a8c6-9a472607373d/manager/0.log" Jan 28 15:41:02 crc kubenswrapper[4893]: I0128 15:41:02.440996 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-dqldg_0dcd4cb9-92c5-4fb0-9718-79fe6b7d2cea/manager/0.log" Jan 28 15:41:02 crc kubenswrapper[4893]: I0128 15:41:02.904570 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed" path="/var/lib/kubelet/pods/4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed/volumes" Jan 28 15:41:02 crc kubenswrapper[4893]: I0128 15:41:02.941562 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-rg997_1a360ec7-efa3-4972-a655-3e21de960aec/manager/0.log" Jan 28 15:41:03 crc kubenswrapper[4893]: I0128 15:41:03.337754 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-jfx6g_20c9ab96-9196-4834-b516-8d1c9564bf35/manager/0.log" Jan 28 15:41:03 crc kubenswrapper[4893]: I0128 15:41:03.793697 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-4rgm2_7740f64d-b660-493b-b3f5-1041a0ce3061/manager/0.log" Jan 28 15:41:04 crc kubenswrapper[4893]: I0128 15:41:04.194588 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-nd8rm_d578cfaa-0b09-476e-9cd0-abd3d6274bd7/manager/0.log" Jan 28 15:41:04 crc kubenswrapper[4893]: I0128 15:41:04.620716 4893 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-qbfns_a5872ed3-9a06-4bd2-b592-b42c548a1db4/manager/0.log" Jan 28 15:41:04 crc kubenswrapper[4893]: I0128 15:41:04.892598 4893 scope.go:117] "RemoveContainer" containerID="7d9081146070c30cd3beaf47d75cd91803a7a4d6cb88172e02e66af988619855" Jan 28 15:41:04 crc kubenswrapper[4893]: E0128 15:41:04.892830 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-vz6mp_nova-kuttl-default(c1771f4b-e3fc-4a93-8a60-c9c53f248e02)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" Jan 28 15:41:04 crc kubenswrapper[4893]: I0128 15:41:04.985159 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-2qgj6_e1e458d4-37a1-4111-9e2d-fa49cbdd9e08/manager/0.log" Jan 28 15:41:05 crc kubenswrapper[4893]: I0128 15:41:05.721926 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:41:05 crc kubenswrapper[4893]: I0128 15:41:05.722312 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:41:05 crc kubenswrapper[4893]: I0128 15:41:05.722366 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" Jan 28 15:41:05 crc kubenswrapper[4893]: I0128 15:41:05.723025 4893 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01"} pod="openshift-machine-config-operator/machine-config-daemon-l2nht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:41:05 crc kubenswrapper[4893]: I0128 15:41:05.723078 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" containerID="cri-o://cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01" gracePeriod=600 Jan 28 15:41:05 crc kubenswrapper[4893]: I0128 15:41:05.742895 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-78947fbfb8-7gj7q_6f1e8a13-7c32-4990-b658-0985329d5811/manager/0.log" Jan 28 15:41:05 crc kubenswrapper[4893]: E0128 15:41:05.845444 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" 
podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:41:05 crc kubenswrapper[4893]: I0128 15:41:05.971339 4893 generic.go:334] "Generic (PLEG): container finished" podID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerID="cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01" exitCode=0 Jan 28 15:41:05 crc kubenswrapper[4893]: I0128 15:41:05.971386 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" event={"ID":"b2ddd967-f9a8-464a-95de-512c9c5874fd","Type":"ContainerDied","Data":"cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01"} Jan 28 15:41:05 crc kubenswrapper[4893]: I0128 15:41:05.971418 4893 scope.go:117] "RemoveContainer" containerID="d1e89d3b5214b1e2076651a4fdac0f9f4db53c16fe20d6f51f420c4a7e4e5bf5" Jan 28 15:41:05 crc kubenswrapper[4893]: I0128 15:41:05.972068 4893 scope.go:117] "RemoveContainer" containerID="cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01" Jan 28 15:41:05 crc kubenswrapper[4893]: E0128 15:41:05.972496 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:41:06 crc kubenswrapper[4893]: I0128 15:41:06.134833 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-index-99pss_04786631-a21b-4006-ab43-c98ac66a34cb/registry-server/0.log" Jan 28 15:41:06 crc kubenswrapper[4893]: I0128 15:41:06.518313 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-b6cft_379dbcd5-96e3-4563-ac73-7264f4b90d68/manager/0.log" Jan 28 15:41:06 crc kubenswrapper[4893]: I0128 15:41:06.910221 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt_bfe9e7f0-b5aa-48a6-9487-e1765752c644/manager/0.log" Jan 28 15:41:07 crc kubenswrapper[4893]: I0128 15:41:07.583132 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5fd66b5d9c-j5x2h_24fb3958-2b40-4b9d-90ee-591dafc3987e/manager/0.log" Jan 28 15:41:07 crc kubenswrapper[4893]: I0128 15:41:07.974439 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-8dszf_1e49f4d1-1856-44a5-91a5-86833c5e9e0c/registry-server/0.log" Jan 28 15:41:08 crc kubenswrapper[4893]: I0128 15:41:08.402028 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-b276g_b70555f3-c876-49fc-bd77-83efa82abac7/manager/0.log" Jan 28 15:41:08 crc kubenswrapper[4893]: I0128 15:41:08.811750 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-ld4p5_9a867ab9-ad43-409c-9d85-0ef229c5e25f/manager/0.log" Jan 28 15:41:09 crc kubenswrapper[4893]: I0128 15:41:09.199409 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-njb2l_d2a88a4d-0cb7-40fd-8e25-74e67785af15/operator/0.log" Jan 28 15:41:09 crc kubenswrapper[4893]: I0128 15:41:09.577728 4893 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-bnr2s_f1bf10ee-2d99-4b1b-ab99-ae2066b96522/manager/0.log" Jan 28 15:41:09 crc kubenswrapper[4893]: I0128 15:41:09.997302 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-bsh7f_2dee9e4e-11c8-4db6-a457-6f7bbf047f70/manager/0.log" Jan 28 15:41:10 crc kubenswrapper[4893]: I0128 15:41:10.370852 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-zjrm8_651741dd-f535-40e3-ba34-96b9ce51cf6a/manager/0.log" Jan 28 15:41:10 crc kubenswrapper[4893]: I0128 15:41:10.714188 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-q9t8p_9f55f343-0f75-4fed-ab7b-71c8dddd4af3/manager/0.log" Jan 28 15:41:16 crc kubenswrapper[4893]: I0128 15:41:16.010234 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-86bf966444-cll8k_a79fa730-be33-48f7-9ef0-7964e2afbede/keystone-api/0.log" Jan 28 15:41:16 crc kubenswrapper[4893]: I0128 15:41:16.891443 4893 scope.go:117] "RemoveContainer" containerID="7d9081146070c30cd3beaf47d75cd91803a7a4d6cb88172e02e66af988619855" Jan 28 15:41:16 crc kubenswrapper[4893]: E0128 15:41:16.891756 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-vz6mp_nova-kuttl-default(c1771f4b-e3fc-4a93-8a60-c9c53f248e02)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" Jan 28 15:41:18 crc kubenswrapper[4893]: I0128 15:41:18.667137 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_memcached-0_4e4c3f33-0d15-4434-9940-21a310e1e272/memcached/0.log" Jan 28 15:41:19 crc kubenswrapper[4893]: I0128 15:41:19.203146 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-api-a703-account-create-update-r88t7_25b927e3-d3f5-4343-af70-bc2eb39a539c/mariadb-account-create-update/0.log" Jan 28 15:41:19 crc kubenswrapper[4893]: I0128 15:41:19.712254 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-api-db-create-hc6tn_4bb0a658-a2dc-4442-a362-e1a6fd576848/mariadb-database-create/0.log" Jan 28 15:41:19 crc kubenswrapper[4893]: I0128 15:41:19.892508 4893 scope.go:117] "RemoveContainer" containerID="cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01" Jan 28 15:41:19 crc kubenswrapper[4893]: E0128 15:41:19.892762 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:41:20 crc kubenswrapper[4893]: I0128 15:41:20.209646 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell0-9d70-account-create-update-97bqx_51a939d5-f485-40b5-bc7b-05d3e063db83/mariadb-account-create-update/0.log" Jan 28 15:41:20 crc kubenswrapper[4893]: I0128 15:41:20.722359 4893 log.go:25] "Finished parsing log 
file" path="/var/log/pods/nova-kuttl-default_nova-cell0-db-create-vxgfw_5bf5b624-d148-4c17-8824-77512ecaadba/mariadb-database-create/0.log" Jan 28 15:41:21 crc kubenswrapper[4893]: I0128 15:41:21.205079 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell1-2799-account-create-update-rspsx_4549a6c6-f4a4-463a-8b6e-2a0d7edeae42/mariadb-account-create-update/0.log" Jan 28 15:41:21 crc kubenswrapper[4893]: I0128 15:41:21.718222 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell1-db-create-fxmm7_d555b18f-0774-4e4c-9b9d-10ee1335d432/mariadb-database-create/0.log" Jan 28 15:41:22 crc kubenswrapper[4893]: I0128 15:41:22.229503 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-api-0_378ab36a-3a2c-4a6d-836f-92eba12307fe/nova-kuttl-api-log/0.log" Jan 28 15:41:22 crc kubenswrapper[4893]: I0128 15:41:22.638731 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-cell-mapping-2bxvl_79f84931-160b-409c-bb0b-193fd8988158/nova-manage/0.log" Jan 28 15:41:23 crc kubenswrapper[4893]: I0128 15:41:23.114357 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-conductor-0_22539c1b-d8a1-4f7d-b202-b33f849a21b4/nova-kuttl-cell0-conductor-conductor/0.log" Jan 28 15:41:23 crc kubenswrapper[4893]: I0128 15:41:23.523628 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-conductor-db-sync-jr59w_e84c5ebf-c963-4acb-b64f-107efda9798d/nova-kuttl-cell0-conductor-db-sync/0.log" Jan 28 15:41:23 crc kubenswrapper[4893]: I0128 15:41:23.897824 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-cell-delete-vz6mp_c1771f4b-e3fc-4a93-8a60-c9c53f248e02/nova-manage/5.log" Jan 28 15:41:24 crc kubenswrapper[4893]: I0128 15:41:24.315669 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-cell-mapping-nbnnn_2b554e78-6b57-406d-8a05-0e2931db92b7/nova-manage/0.log" Jan 28 15:41:24 crc kubenswrapper[4893]: I0128 15:41:24.742602 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-conductor-0_f05773d2-58b3-4e11-9962-45502872c375/nova-kuttl-cell1-conductor-conductor/0.log" Jan 28 15:41:25 crc kubenswrapper[4893]: I0128 15:41:25.126875 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-conductor-db-sync-9kfvk_5788fc83-55a9-489b-b094-e6a36fe58124/nova-kuttl-cell1-conductor-db-sync/0.log" Jan 28 15:41:25 crc kubenswrapper[4893]: I0128 15:41:25.541355 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-novncproxy-0_90e30875-ed7b-4c7e-b8ed-3deb340cfd2b/nova-kuttl-cell1-novncproxy-novncproxy/0.log" Jan 28 15:41:26 crc kubenswrapper[4893]: I0128 15:41:26.010232 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-metadata-0_28ce2e6b-b04e-4d88-a01b-101d056e8137/nova-kuttl-metadata-log/0.log" Jan 28 15:41:26 crc kubenswrapper[4893]: I0128 15:41:26.452372 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-scheduler-0_00c7d078-56fd-4f9a-a20a-5dc498625eb1/nova-kuttl-scheduler-scheduler/0.log" Jan 28 15:41:26 crc kubenswrapper[4893]: I0128 15:41:26.836620 4893 log.go:25] "Finished parsing log file" 
path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_1418afdb-10ec-4cb7-853d-d0f755621625/galera/0.log" Jan 28 15:41:27 crc kubenswrapper[4893]: I0128 15:41:27.278896 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_6cf03c71-da90-490d-8f3c-f5646a45b9d6/galera/0.log" Jan 28 15:41:27 crc kubenswrapper[4893]: I0128 15:41:27.667248 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstackclient_5a7bef9d-825c-491a-887c-651ea4b6ca59/openstackclient/0.log" Jan 28 15:41:28 crc kubenswrapper[4893]: I0128 15:41:28.081049 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-7dbd979f64-625pv_092a35ea-0d0f-4538-a702-fcf0a09e3683/placement-log/0.log" Jan 28 15:41:28 crc kubenswrapper[4893]: I0128 15:41:28.489011 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_dcd1c126-70b7-46e1-8226-bc7dc353ecdb/rabbitmq/0.log" Jan 28 15:41:28 crc kubenswrapper[4893]: I0128 15:41:28.902520 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_67b2b466-ebc4-41d8-8b96-a285eb0609f5/rabbitmq/0.log" Jan 28 15:41:29 crc kubenswrapper[4893]: I0128 15:41:29.318741 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_01a81616-675d-43ec-acb2-7a4541b96771/rabbitmq/0.log" Jan 28 15:41:29 crc kubenswrapper[4893]: I0128 15:41:29.891330 4893 scope.go:117] "RemoveContainer" containerID="7d9081146070c30cd3beaf47d75cd91803a7a4d6cb88172e02e66af988619855" Jan 28 15:41:30 crc kubenswrapper[4893]: I0128 15:41:30.170834 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" event={"ID":"c1771f4b-e3fc-4a93-8a60-c9c53f248e02","Type":"ContainerStarted","Data":"846009a7906affaeb60b98fdebb8d117c85906bb3791d47a12d30b8a98576949"} Jan 28 15:41:31 crc kubenswrapper[4893]: I0128 15:41:31.204156 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp"] Jan 28 15:41:31 crc kubenswrapper[4893]: I0128 15:41:31.205853 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" containerName="nova-manage" containerID="cri-o://846009a7906affaeb60b98fdebb8d117c85906bb3791d47a12d30b8a98576949" gracePeriod=30 Jan 28 15:41:34 crc kubenswrapper[4893]: I0128 15:41:34.891796 4893 scope.go:117] "RemoveContainer" containerID="cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01" Jan 28 15:41:34 crc kubenswrapper[4893]: E0128 15:41:34.892665 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:41:35 crc kubenswrapper[4893]: I0128 15:41:35.230761 4893 generic.go:334] "Generic (PLEG): container finished" podID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" containerID="846009a7906affaeb60b98fdebb8d117c85906bb3791d47a12d30b8a98576949" exitCode=2 Jan 28 15:41:35 crc kubenswrapper[4893]: I0128 15:41:35.230861 4893 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" event={"ID":"c1771f4b-e3fc-4a93-8a60-c9c53f248e02","Type":"ContainerDied","Data":"846009a7906affaeb60b98fdebb8d117c85906bb3791d47a12d30b8a98576949"} Jan 28 15:41:35 crc kubenswrapper[4893]: I0128 15:41:35.230948 4893 scope.go:117] "RemoveContainer" containerID="7d9081146070c30cd3beaf47d75cd91803a7a4d6cb88172e02e66af988619855" Jan 28 15:41:35 crc kubenswrapper[4893]: I0128 15:41:35.554443 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" Jan 28 15:41:35 crc kubenswrapper[4893]: I0128 15:41:35.736815 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1771f4b-e3fc-4a93-8a60-c9c53f248e02-config-data\") pod \"c1771f4b-e3fc-4a93-8a60-c9c53f248e02\" (UID: \"c1771f4b-e3fc-4a93-8a60-c9c53f248e02\") " Jan 28 15:41:35 crc kubenswrapper[4893]: I0128 15:41:35.736907 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1771f4b-e3fc-4a93-8a60-c9c53f248e02-scripts\") pod \"c1771f4b-e3fc-4a93-8a60-c9c53f248e02\" (UID: \"c1771f4b-e3fc-4a93-8a60-c9c53f248e02\") " Jan 28 15:41:35 crc kubenswrapper[4893]: I0128 15:41:35.737064 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m85vc\" (UniqueName: \"kubernetes.io/projected/c1771f4b-e3fc-4a93-8a60-c9c53f248e02-kube-api-access-m85vc\") pod \"c1771f4b-e3fc-4a93-8a60-c9c53f248e02\" (UID: \"c1771f4b-e3fc-4a93-8a60-c9c53f248e02\") " Jan 28 15:41:35 crc kubenswrapper[4893]: I0128 15:41:35.743538 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1771f4b-e3fc-4a93-8a60-c9c53f248e02-scripts" (OuterVolumeSpecName: "scripts") pod "c1771f4b-e3fc-4a93-8a60-c9c53f248e02" (UID: "c1771f4b-e3fc-4a93-8a60-c9c53f248e02"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:41:35 crc kubenswrapper[4893]: I0128 15:41:35.750671 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1771f4b-e3fc-4a93-8a60-c9c53f248e02-kube-api-access-m85vc" (OuterVolumeSpecName: "kube-api-access-m85vc") pod "c1771f4b-e3fc-4a93-8a60-c9c53f248e02" (UID: "c1771f4b-e3fc-4a93-8a60-c9c53f248e02"). InnerVolumeSpecName "kube-api-access-m85vc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:41:35 crc kubenswrapper[4893]: I0128 15:41:35.763691 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1771f4b-e3fc-4a93-8a60-c9c53f248e02-config-data" (OuterVolumeSpecName: "config-data") pod "c1771f4b-e3fc-4a93-8a60-c9c53f248e02" (UID: "c1771f4b-e3fc-4a93-8a60-c9c53f248e02"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:41:35 crc kubenswrapper[4893]: I0128 15:41:35.839817 4893 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1771f4b-e3fc-4a93-8a60-c9c53f248e02-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:41:35 crc kubenswrapper[4893]: I0128 15:41:35.839885 4893 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1771f4b-e3fc-4a93-8a60-c9c53f248e02-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:41:35 crc kubenswrapper[4893]: I0128 15:41:35.839906 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m85vc\" (UniqueName: \"kubernetes.io/projected/c1771f4b-e3fc-4a93-8a60-c9c53f248e02-kube-api-access-m85vc\") on node \"crc\" DevicePath \"\"" Jan 28 15:41:36 crc kubenswrapper[4893]: I0128 15:41:36.240579 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" event={"ID":"c1771f4b-e3fc-4a93-8a60-c9c53f248e02","Type":"ContainerDied","Data":"9b7200dd24bdbad8fc773016cd805e1c0940f154008522d1a6fc6f90a3758134"} Jan 28 15:41:36 crc kubenswrapper[4893]: I0128 15:41:36.241139 4893 scope.go:117] "RemoveContainer" containerID="846009a7906affaeb60b98fdebb8d117c85906bb3791d47a12d30b8a98576949" Jan 28 15:41:36 crc kubenswrapper[4893]: I0128 15:41:36.240676 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp" Jan 28 15:41:36 crc kubenswrapper[4893]: I0128 15:41:36.297004 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp"] Jan 28 15:41:36 crc kubenswrapper[4893]: I0128 15:41:36.304655 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-vz6mp"] Jan 28 15:41:36 crc kubenswrapper[4893]: I0128 15:41:36.901157 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" path="/var/lib/kubelet/pods/c1771f4b-e3fc-4a93-8a60-c9c53f248e02/volumes" Jan 28 15:41:47 crc kubenswrapper[4893]: I0128 15:41:47.892154 4893 scope.go:117] "RemoveContainer" containerID="cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01" Jan 28 15:41:47 crc kubenswrapper[4893]: E0128 15:41:47.892611 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:41:59 crc kubenswrapper[4893]: I0128 15:41:59.025426 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg_8cb55e6c-bd6a-496e-a2bd-85b72cfb8146/extract/0.log" Jan 28 15:41:59 crc kubenswrapper[4893]: I0128 15:41:59.420626 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l_e0602f55-847f-4987-ba4c-9aa5fb47ad7d/extract/0.log" Jan 28 15:41:59 crc kubenswrapper[4893]: I0128 15:41:59.807149 4893 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-p6nxj_c2188ba2-ad62-4873-abfe-fa7ad88b57a6/manager/0.log" Jan 28 15:42:00 crc kubenswrapper[4893]: I0128 15:42:00.228302 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-vdcjn_72d2e324-70de-4019-9673-0a86620ca028/manager/0.log" Jan 28 15:42:00 crc kubenswrapper[4893]: I0128 15:42:00.638517 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-jnhg7_17019a37-b628-4464-b037-470c2be80308/manager/0.log" Jan 28 15:42:00 crc kubenswrapper[4893]: I0128 15:42:00.891968 4893 scope.go:117] "RemoveContainer" containerID="cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01" Jan 28 15:42:00 crc kubenswrapper[4893]: E0128 15:42:00.892226 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:42:01 crc kubenswrapper[4893]: I0128 15:42:01.038517 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-dlrsm_4179ac2f-dd41-4cd3-8558-6daba8252582/manager/0.log" Jan 28 15:42:01 crc kubenswrapper[4893]: I0128 15:42:01.444852 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-j8x44_0e525c35-621a-43f8-a8c6-9a472607373d/manager/0.log" Jan 28 15:42:01 crc kubenswrapper[4893]: I0128 15:42:01.864542 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-dqldg_0dcd4cb9-92c5-4fb0-9718-79fe6b7d2cea/manager/0.log" Jan 28 15:42:02 crc kubenswrapper[4893]: I0128 15:42:02.382660 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-rg997_1a360ec7-efa3-4972-a655-3e21de960aec/manager/0.log" Jan 28 15:42:02 crc kubenswrapper[4893]: I0128 15:42:02.772946 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-jfx6g_20c9ab96-9196-4834-b516-8d1c9564bf35/manager/0.log" Jan 28 15:42:03 crc kubenswrapper[4893]: I0128 15:42:03.276700 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-4rgm2_7740f64d-b660-493b-b3f5-1041a0ce3061/manager/0.log" Jan 28 15:42:03 crc kubenswrapper[4893]: I0128 15:42:03.682237 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-nd8rm_d578cfaa-0b09-476e-9cd0-abd3d6274bd7/manager/0.log" Jan 28 15:42:04 crc kubenswrapper[4893]: I0128 15:42:04.121020 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-qbfns_a5872ed3-9a06-4bd2-b592-b42c548a1db4/manager/0.log" Jan 28 15:42:04 crc kubenswrapper[4893]: I0128 15:42:04.550183 4893 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-2qgj6_e1e458d4-37a1-4111-9e2d-fa49cbdd9e08/manager/0.log" Jan 28 15:42:05 crc kubenswrapper[4893]: I0128 15:42:05.344280 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-78947fbfb8-7gj7q_6f1e8a13-7c32-4990-b658-0985329d5811/manager/0.log" Jan 28 15:42:05 crc kubenswrapper[4893]: I0128 15:42:05.745644 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-index-99pss_04786631-a21b-4006-ab43-c98ac66a34cb/registry-server/0.log" Jan 28 15:42:06 crc kubenswrapper[4893]: I0128 15:42:06.131930 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-b6cft_379dbcd5-96e3-4563-ac73-7264f4b90d68/manager/0.log" Jan 28 15:42:06 crc kubenswrapper[4893]: I0128 15:42:06.508460 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt_bfe9e7f0-b5aa-48a6-9487-e1765752c644/manager/0.log" Jan 28 15:42:07 crc kubenswrapper[4893]: I0128 15:42:07.177215 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5fd66b5d9c-j5x2h_24fb3958-2b40-4b9d-90ee-591dafc3987e/manager/0.log" Jan 28 15:42:07 crc kubenswrapper[4893]: I0128 15:42:07.548162 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-8dszf_1e49f4d1-1856-44a5-91a5-86833c5e9e0c/registry-server/0.log" Jan 28 15:42:07 crc kubenswrapper[4893]: I0128 15:42:07.935001 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-b276g_b70555f3-c876-49fc-bd77-83efa82abac7/manager/0.log" Jan 28 15:42:08 crc kubenswrapper[4893]: I0128 15:42:08.387853 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-ld4p5_9a867ab9-ad43-409c-9d85-0ef229c5e25f/manager/0.log" Jan 28 15:42:08 crc kubenswrapper[4893]: I0128 15:42:08.763753 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-njb2l_d2a88a4d-0cb7-40fd-8e25-74e67785af15/operator/0.log" Jan 28 15:42:09 crc kubenswrapper[4893]: I0128 15:42:09.141057 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-bnr2s_f1bf10ee-2d99-4b1b-ab99-ae2066b96522/manager/0.log" Jan 28 15:42:09 crc kubenswrapper[4893]: I0128 15:42:09.519011 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-bsh7f_2dee9e4e-11c8-4db6-a457-6f7bbf047f70/manager/0.log" Jan 28 15:42:09 crc kubenswrapper[4893]: I0128 15:42:09.891844 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-zjrm8_651741dd-f535-40e3-ba34-96b9ce51cf6a/manager/0.log" Jan 28 15:42:10 crc kubenswrapper[4893]: I0128 15:42:10.257954 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-q9t8p_9f55f343-0f75-4fed-ab7b-71c8dddd4af3/manager/0.log" Jan 28 15:42:12 crc kubenswrapper[4893]: I0128 15:42:12.899638 4893 scope.go:117] "RemoveContainer" 
containerID="cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01" Jan 28 15:42:12 crc kubenswrapper[4893]: E0128 15:42:12.900010 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:42:23 crc kubenswrapper[4893]: I0128 15:42:23.892551 4893 scope.go:117] "RemoveContainer" containerID="cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01" Jan 28 15:42:23 crc kubenswrapper[4893]: E0128 15:42:23.893881 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.194950 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2b4d6/must-gather-wlf6l"] Jan 28 15:42:29 crc kubenswrapper[4893]: E0128 15:42:29.195943 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" containerName="nova-manage" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.195956 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" containerName="nova-manage" Jan 28 15:42:29 crc kubenswrapper[4893]: E0128 15:42:29.195971 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed" containerName="extract-content" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.195978 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed" containerName="extract-content" Jan 28 15:42:29 crc kubenswrapper[4893]: E0128 15:42:29.195992 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" containerName="nova-manage" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.195997 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" containerName="nova-manage" Jan 28 15:42:29 crc kubenswrapper[4893]: E0128 15:42:29.196009 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0225d599-95cb-4ef4-aff0-9eea05552449" containerName="registry-server" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.196014 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="0225d599-95cb-4ef4-aff0-9eea05552449" containerName="registry-server" Jan 28 15:42:29 crc kubenswrapper[4893]: E0128 15:42:29.196027 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" containerName="nova-manage" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.196033 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" containerName="nova-manage" Jan 28 15:42:29 crc kubenswrapper[4893]: E0128 15:42:29.196043 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" 
containerName="nova-manage" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.196049 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" containerName="nova-manage" Jan 28 15:42:29 crc kubenswrapper[4893]: E0128 15:42:29.196055 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0225d599-95cb-4ef4-aff0-9eea05552449" containerName="extract-content" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.196061 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="0225d599-95cb-4ef4-aff0-9eea05552449" containerName="extract-content" Jan 28 15:42:29 crc kubenswrapper[4893]: E0128 15:42:29.196074 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed" containerName="registry-server" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.196079 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed" containerName="registry-server" Jan 28 15:42:29 crc kubenswrapper[4893]: E0128 15:42:29.196094 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0225d599-95cb-4ef4-aff0-9eea05552449" containerName="extract-utilities" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.196100 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="0225d599-95cb-4ef4-aff0-9eea05552449" containerName="extract-utilities" Jan 28 15:42:29 crc kubenswrapper[4893]: E0128 15:42:29.196110 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" containerName="nova-manage" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.196116 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" containerName="nova-manage" Jan 28 15:42:29 crc kubenswrapper[4893]: E0128 15:42:29.196124 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed" containerName="extract-utilities" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.196130 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed" containerName="extract-utilities" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.196260 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" containerName="nova-manage" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.196270 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="0225d599-95cb-4ef4-aff0-9eea05552449" containerName="registry-server" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.196280 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" containerName="nova-manage" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.196287 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" containerName="nova-manage" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.196294 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" containerName="nova-manage" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.196309 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" containerName="nova-manage" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.196316 4893 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4844f6c0-34e9-4b3c-8ca3-d1dc4d9ccaed" containerName="registry-server" Jan 28 15:42:29 crc kubenswrapper[4893]: E0128 15:42:29.196461 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" containerName="nova-manage" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.196485 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" containerName="nova-manage" Jan 28 15:42:29 crc kubenswrapper[4893]: E0128 15:42:29.196495 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" containerName="nova-manage" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.196501 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" containerName="nova-manage" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.196637 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" containerName="nova-manage" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.196645 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1771f4b-e3fc-4a93-8a60-c9c53f248e02" containerName="nova-manage" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.197164 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2b4d6/must-gather-wlf6l" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.199281 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2b4d6"/"openshift-service-ca.crt" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.200100 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2b4d6"/"kube-root-ca.crt" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.213557 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2b4d6/must-gather-wlf6l"] Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.244526 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmd8k\" (UniqueName: \"kubernetes.io/projected/a70520a2-7db0-4c8e-b0e6-18d66c6d0e76-kube-api-access-tmd8k\") pod \"must-gather-wlf6l\" (UID: \"a70520a2-7db0-4c8e-b0e6-18d66c6d0e76\") " pod="openshift-must-gather-2b4d6/must-gather-wlf6l" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.244625 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a70520a2-7db0-4c8e-b0e6-18d66c6d0e76-must-gather-output\") pod \"must-gather-wlf6l\" (UID: \"a70520a2-7db0-4c8e-b0e6-18d66c6d0e76\") " pod="openshift-must-gather-2b4d6/must-gather-wlf6l" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.346554 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmd8k\" (UniqueName: \"kubernetes.io/projected/a70520a2-7db0-4c8e-b0e6-18d66c6d0e76-kube-api-access-tmd8k\") pod \"must-gather-wlf6l\" (UID: \"a70520a2-7db0-4c8e-b0e6-18d66c6d0e76\") " pod="openshift-must-gather-2b4d6/must-gather-wlf6l" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.346702 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a70520a2-7db0-4c8e-b0e6-18d66c6d0e76-must-gather-output\") pod \"must-gather-wlf6l\" (UID: 
\"a70520a2-7db0-4c8e-b0e6-18d66c6d0e76\") " pod="openshift-must-gather-2b4d6/must-gather-wlf6l" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.347465 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a70520a2-7db0-4c8e-b0e6-18d66c6d0e76-must-gather-output\") pod \"must-gather-wlf6l\" (UID: \"a70520a2-7db0-4c8e-b0e6-18d66c6d0e76\") " pod="openshift-must-gather-2b4d6/must-gather-wlf6l" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.407551 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmd8k\" (UniqueName: \"kubernetes.io/projected/a70520a2-7db0-4c8e-b0e6-18d66c6d0e76-kube-api-access-tmd8k\") pod \"must-gather-wlf6l\" (UID: \"a70520a2-7db0-4c8e-b0e6-18d66c6d0e76\") " pod="openshift-must-gather-2b4d6/must-gather-wlf6l" Jan 28 15:42:29 crc kubenswrapper[4893]: I0128 15:42:29.518844 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2b4d6/must-gather-wlf6l" Jan 28 15:42:30 crc kubenswrapper[4893]: I0128 15:42:30.005242 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2b4d6/must-gather-wlf6l"] Jan 28 15:42:30 crc kubenswrapper[4893]: I0128 15:42:30.015404 4893 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:42:30 crc kubenswrapper[4893]: I0128 15:42:30.719265 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2b4d6/must-gather-wlf6l" event={"ID":"a70520a2-7db0-4c8e-b0e6-18d66c6d0e76","Type":"ContainerStarted","Data":"70a98cebb8dd40cd7339d786983fdbf7beb20f584058f22da8522786dea28be0"} Jan 28 15:42:34 crc kubenswrapper[4893]: I0128 15:42:34.892340 4893 scope.go:117] "RemoveContainer" containerID="cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01" Jan 28 15:42:34 crc kubenswrapper[4893]: E0128 15:42:34.893206 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:42:36 crc kubenswrapper[4893]: I0128 15:42:36.770994 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2b4d6/must-gather-wlf6l" event={"ID":"a70520a2-7db0-4c8e-b0e6-18d66c6d0e76","Type":"ContainerStarted","Data":"192d4d6094779e7e00ad31b96e2d3754d036acb7da7c0b24623424aa129f4866"} Jan 28 15:42:37 crc kubenswrapper[4893]: I0128 15:42:37.779457 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2b4d6/must-gather-wlf6l" event={"ID":"a70520a2-7db0-4c8e-b0e6-18d66c6d0e76","Type":"ContainerStarted","Data":"36c446eefdf48b09fc0dd34321adc2a2dad4517320e1038ccd137b74c9f34718"} Jan 28 15:42:37 crc kubenswrapper[4893]: I0128 15:42:37.800850 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2b4d6/must-gather-wlf6l" podStartSLOduration=2.284717877 podStartE2EDuration="8.80082972s" podCreationTimestamp="2026-01-28 15:42:29 +0000 UTC" firstStartedPulling="2026-01-28 15:42:30.015223455 +0000 UTC m=+2467.788838483" lastFinishedPulling="2026-01-28 15:42:36.531335298 +0000 UTC m=+2474.304950326" observedRunningTime="2026-01-28 15:42:37.799258157 
+0000 UTC m=+2475.572873185" watchObservedRunningTime="2026-01-28 15:42:37.80082972 +0000 UTC m=+2475.574444748" Jan 28 15:42:46 crc kubenswrapper[4893]: I0128 15:42:46.892868 4893 scope.go:117] "RemoveContainer" containerID="cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01" Jan 28 15:42:46 crc kubenswrapper[4893]: E0128 15:42:46.894200 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:43:00 crc kubenswrapper[4893]: I0128 15:43:00.893736 4893 scope.go:117] "RemoveContainer" containerID="cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01" Jan 28 15:43:00 crc kubenswrapper[4893]: E0128 15:43:00.894568 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:43:11 crc kubenswrapper[4893]: I0128 15:43:11.891687 4893 scope.go:117] "RemoveContainer" containerID="cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01" Jan 28 15:43:11 crc kubenswrapper[4893]: E0128 15:43:11.892448 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:43:22 crc kubenswrapper[4893]: I0128 15:43:22.896643 4893 scope.go:117] "RemoveContainer" containerID="cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01" Jan 28 15:43:22 crc kubenswrapper[4893]: E0128 15:43:22.897416 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:43:33 crc kubenswrapper[4893]: I0128 15:43:33.891286 4893 scope.go:117] "RemoveContainer" containerID="cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01" Jan 28 15:43:33 crc kubenswrapper[4893]: E0128 15:43:33.891963 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:43:40 crc kubenswrapper[4893]: I0128 15:43:40.407616 4893 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg_8cb55e6c-bd6a-496e-a2bd-85b72cfb8146/util/0.log" Jan 28 15:43:40 crc kubenswrapper[4893]: I0128 15:43:40.557308 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg_8cb55e6c-bd6a-496e-a2bd-85b72cfb8146/util/0.log" Jan 28 15:43:40 crc kubenswrapper[4893]: I0128 15:43:40.588556 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg_8cb55e6c-bd6a-496e-a2bd-85b72cfb8146/pull/0.log" Jan 28 15:43:40 crc kubenswrapper[4893]: I0128 15:43:40.602078 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg_8cb55e6c-bd6a-496e-a2bd-85b72cfb8146/pull/0.log" Jan 28 15:43:40 crc kubenswrapper[4893]: I0128 15:43:40.781901 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg_8cb55e6c-bd6a-496e-a2bd-85b72cfb8146/pull/0.log" Jan 28 15:43:40 crc kubenswrapper[4893]: I0128 15:43:40.782395 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg_8cb55e6c-bd6a-496e-a2bd-85b72cfb8146/extract/0.log" Jan 28 15:43:40 crc kubenswrapper[4893]: I0128 15:43:40.803265 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ded8731a0d326f7c50ac03f7234a3b30852589d03e2cff042feef35a3vgflg_8cb55e6c-bd6a-496e-a2bd-85b72cfb8146/util/0.log" Jan 28 15:43:40 crc kubenswrapper[4893]: I0128 15:43:40.971267 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l_e0602f55-847f-4987-ba4c-9aa5fb47ad7d/util/0.log" Jan 28 15:43:41 crc kubenswrapper[4893]: I0128 15:43:41.127147 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l_e0602f55-847f-4987-ba4c-9aa5fb47ad7d/util/0.log" Jan 28 15:43:41 crc kubenswrapper[4893]: I0128 15:43:41.133298 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l_e0602f55-847f-4987-ba4c-9aa5fb47ad7d/pull/0.log" Jan 28 15:43:41 crc kubenswrapper[4893]: I0128 15:43:41.136027 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l_e0602f55-847f-4987-ba4c-9aa5fb47ad7d/pull/0.log" Jan 28 15:43:41 crc kubenswrapper[4893]: I0128 15:43:41.269895 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l_e0602f55-847f-4987-ba4c-9aa5fb47ad7d/util/0.log" Jan 28 15:43:41 crc kubenswrapper[4893]: I0128 15:43:41.320047 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l_e0602f55-847f-4987-ba4c-9aa5fb47ad7d/pull/0.log" Jan 28 15:43:41 crc kubenswrapper[4893]: I0128 15:43:41.320827 4893 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_35c4c6fdcafae2854eda99d3bb454204437af2d2be509b1eac2bcfc5bbjxp5l_e0602f55-847f-4987-ba4c-9aa5fb47ad7d/extract/0.log" Jan 28 15:43:41 crc kubenswrapper[4893]: I0128 15:43:41.469517 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-p6nxj_c2188ba2-ad62-4873-abfe-fa7ad88b57a6/manager/0.log" Jan 28 15:43:41 crc kubenswrapper[4893]: I0128 15:43:41.555910 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-vdcjn_72d2e324-70de-4019-9673-0a86620ca028/manager/0.log" Jan 28 15:43:41 crc kubenswrapper[4893]: I0128 15:43:41.676926 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-jnhg7_17019a37-b628-4464-b037-470c2be80308/manager/0.log" Jan 28 15:43:41 crc kubenswrapper[4893]: I0128 15:43:41.736777 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-dlrsm_4179ac2f-dd41-4cd3-8558-6daba8252582/manager/0.log" Jan 28 15:43:41 crc kubenswrapper[4893]: I0128 15:43:41.918918 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-j8x44_0e525c35-621a-43f8-a8c6-9a472607373d/manager/0.log" Jan 28 15:43:41 crc kubenswrapper[4893]: I0128 15:43:41.925419 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-dqldg_0dcd4cb9-92c5-4fb0-9718-79fe6b7d2cea/manager/0.log" Jan 28 15:43:42 crc kubenswrapper[4893]: I0128 15:43:42.145308 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-jfx6g_20c9ab96-9196-4834-b516-8d1c9564bf35/manager/0.log" Jan 28 15:43:42 crc kubenswrapper[4893]: I0128 15:43:42.199625 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-rg997_1a360ec7-efa3-4972-a655-3e21de960aec/manager/0.log" Jan 28 15:43:42 crc kubenswrapper[4893]: I0128 15:43:42.375790 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-nd8rm_d578cfaa-0b09-476e-9cd0-abd3d6274bd7/manager/0.log" Jan 28 15:43:42 crc kubenswrapper[4893]: I0128 15:43:42.408356 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-4rgm2_7740f64d-b660-493b-b3f5-1041a0ce3061/manager/0.log" Jan 28 15:43:42 crc kubenswrapper[4893]: I0128 15:43:42.600085 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-qbfns_a5872ed3-9a06-4bd2-b592-b42c548a1db4/manager/0.log" Jan 28 15:43:42 crc kubenswrapper[4893]: I0128 15:43:42.620596 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-2qgj6_e1e458d4-37a1-4111-9e2d-fa49cbdd9e08/manager/0.log" Jan 28 15:43:42 crc kubenswrapper[4893]: I0128 15:43:42.837940 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-index-99pss_04786631-a21b-4006-ab43-c98ac66a34cb/registry-server/0.log" Jan 28 15:43:43 crc kubenswrapper[4893]: I0128 15:43:43.028989 4893 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-b6cft_379dbcd5-96e3-4563-ac73-7264f4b90d68/manager/0.log" Jan 28 15:43:43 crc kubenswrapper[4893]: I0128 15:43:43.117394 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-78947fbfb8-7gj7q_6f1e8a13-7c32-4990-b658-0985329d5811/manager/0.log" Jan 28 15:43:43 crc kubenswrapper[4893]: I0128 15:43:43.150858 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854ngrtt_bfe9e7f0-b5aa-48a6-9487-e1765752c644/manager/0.log" Jan 28 15:43:43 crc kubenswrapper[4893]: I0128 15:43:43.351878 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-8dszf_1e49f4d1-1856-44a5-91a5-86833c5e9e0c/registry-server/0.log" Jan 28 15:43:43 crc kubenswrapper[4893]: I0128 15:43:43.563941 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5fd66b5d9c-j5x2h_24fb3958-2b40-4b9d-90ee-591dafc3987e/manager/0.log" Jan 28 15:43:43 crc kubenswrapper[4893]: I0128 15:43:43.594796 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-b276g_b70555f3-c876-49fc-bd77-83efa82abac7/manager/0.log" Jan 28 15:43:43 crc kubenswrapper[4893]: I0128 15:43:43.699424 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-ld4p5_9a867ab9-ad43-409c-9d85-0ef229c5e25f/manager/0.log" Jan 28 15:43:43 crc kubenswrapper[4893]: I0128 15:43:43.834499 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-njb2l_d2a88a4d-0cb7-40fd-8e25-74e67785af15/operator/0.log" Jan 28 15:43:43 crc kubenswrapper[4893]: I0128 15:43:43.948727 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-bnr2s_f1bf10ee-2d99-4b1b-ab99-ae2066b96522/manager/0.log" Jan 28 15:43:44 crc kubenswrapper[4893]: I0128 15:43:44.043522 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-bsh7f_2dee9e4e-11c8-4db6-a457-6f7bbf047f70/manager/0.log" Jan 28 15:43:44 crc kubenswrapper[4893]: I0128 15:43:44.170959 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-zjrm8_651741dd-f535-40e3-ba34-96b9ce51cf6a/manager/0.log" Jan 28 15:43:44 crc kubenswrapper[4893]: I0128 15:43:44.237743 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-q9t8p_9f55f343-0f75-4fed-ab7b-71c8dddd4af3/manager/0.log" Jan 28 15:43:48 crc kubenswrapper[4893]: I0128 15:43:48.892803 4893 scope.go:117] "RemoveContainer" containerID="cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01" Jan 28 15:43:48 crc kubenswrapper[4893]: E0128 15:43:48.893919 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" 
podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:43:53 crc kubenswrapper[4893]: I0128 15:43:53.047971 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-db-create-hc6tn"] Jan 28 15:43:53 crc kubenswrapper[4893]: I0128 15:43:53.056093 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-db-create-hc6tn"] Jan 28 15:43:54 crc kubenswrapper[4893]: I0128 15:43:54.036140 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-9d70-account-create-update-97bqx"] Jan 28 15:43:54 crc kubenswrapper[4893]: I0128 15:43:54.068978 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-a703-account-create-update-r88t7"] Jan 28 15:43:54 crc kubenswrapper[4893]: I0128 15:43:54.077260 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-fxmm7"] Jan 28 15:43:54 crc kubenswrapper[4893]: I0128 15:43:54.085635 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-a703-account-create-update-r88t7"] Jan 28 15:43:54 crc kubenswrapper[4893]: I0128 15:43:54.095536 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-9d70-account-create-update-97bqx"] Jan 28 15:43:54 crc kubenswrapper[4893]: I0128 15:43:54.100973 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-fxmm7"] Jan 28 15:43:54 crc kubenswrapper[4893]: I0128 15:43:54.901654 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25b927e3-d3f5-4343-af70-bc2eb39a539c" path="/var/lib/kubelet/pods/25b927e3-d3f5-4343-af70-bc2eb39a539c/volumes" Jan 28 15:43:54 crc kubenswrapper[4893]: I0128 15:43:54.902410 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb0a658-a2dc-4442-a362-e1a6fd576848" path="/var/lib/kubelet/pods/4bb0a658-a2dc-4442-a362-e1a6fd576848/volumes" Jan 28 15:43:54 crc kubenswrapper[4893]: I0128 15:43:54.903224 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51a939d5-f485-40b5-bc7b-05d3e063db83" path="/var/lib/kubelet/pods/51a939d5-f485-40b5-bc7b-05d3e063db83/volumes" Jan 28 15:43:54 crc kubenswrapper[4893]: I0128 15:43:54.903953 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d555b18f-0774-4e4c-9b9d-10ee1335d432" path="/var/lib/kubelet/pods/d555b18f-0774-4e4c-9b9d-10ee1335d432/volumes" Jan 28 15:43:55 crc kubenswrapper[4893]: I0128 15:43:55.027985 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-vxgfw"] Jan 28 15:43:55 crc kubenswrapper[4893]: I0128 15:43:55.034727 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-2799-account-create-update-rspsx"] Jan 28 15:43:55 crc kubenswrapper[4893]: I0128 15:43:55.051566 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-vxgfw"] Jan 28 15:43:55 crc kubenswrapper[4893]: I0128 15:43:55.058909 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-2799-account-create-update-rspsx"] Jan 28 15:43:56 crc kubenswrapper[4893]: I0128 15:43:56.901554 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4549a6c6-f4a4-463a-8b6e-2a0d7edeae42" path="/var/lib/kubelet/pods/4549a6c6-f4a4-463a-8b6e-2a0d7edeae42/volumes" Jan 28 15:43:56 crc kubenswrapper[4893]: I0128 15:43:56.902764 4893 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="5bf5b624-d148-4c17-8824-77512ecaadba" path="/var/lib/kubelet/pods/5bf5b624-d148-4c17-8824-77512ecaadba/volumes" Jan 28 15:44:00 crc kubenswrapper[4893]: I0128 15:44:00.892436 4893 scope.go:117] "RemoveContainer" containerID="cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01" Jan 28 15:44:00 crc kubenswrapper[4893]: E0128 15:44:00.893408 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:44:02 crc kubenswrapper[4893]: I0128 15:44:02.806493 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-bdn99_36ea2b14-0c67-4a42-b9cf-cb1a8aabc1a6/control-plane-machine-set-operator/0.log" Jan 28 15:44:02 crc kubenswrapper[4893]: I0128 15:44:02.984500 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-8ppbb_ef3c4a5f-725d-4be0-b800-ab95fba9e33e/kube-rbac-proxy/0.log" Jan 28 15:44:03 crc kubenswrapper[4893]: I0128 15:44:03.003915 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-8ppbb_ef3c4a5f-725d-4be0-b800-ab95fba9e33e/machine-api-operator/0.log" Jan 28 15:44:04 crc kubenswrapper[4893]: I0128 15:44:04.029182 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jr59w"] Jan 28 15:44:04 crc kubenswrapper[4893]: I0128 15:44:04.037763 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jr59w"] Jan 28 15:44:04 crc kubenswrapper[4893]: I0128 15:44:04.901503 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e84c5ebf-c963-4acb-b64f-107efda9798d" path="/var/lib/kubelet/pods/e84c5ebf-c963-4acb-b64f-107efda9798d/volumes" Jan 28 15:44:11 crc kubenswrapper[4893]: I0128 15:44:11.927959 4893 scope.go:117] "RemoveContainer" containerID="cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01" Jan 28 15:44:11 crc kubenswrapper[4893]: E0128 15:44:11.928820 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:44:15 crc kubenswrapper[4893]: I0128 15:44:15.206696 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-ffj99_b3f91a14-0acc-4ebe-8e34-4d8be1758b80/cert-manager-controller/0.log" Jan 28 15:44:15 crc kubenswrapper[4893]: I0128 15:44:15.444730 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-5gtt7_20d6abd9-a533-4fcd-abab-402ace4af89f/cert-manager-cainjector/0.log" Jan 28 15:44:15 crc kubenswrapper[4893]: I0128 15:44:15.524083 4893 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-nlv8z_8f982557-1def-4e14-868b-59a20e936677/cert-manager-webhook/0.log" Jan 28 15:44:23 crc kubenswrapper[4893]: I0128 15:44:23.041593 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-9kfvk"] Jan 28 15:44:23 crc kubenswrapper[4893]: I0128 15:44:23.047985 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-9kfvk"] Jan 28 15:44:24 crc kubenswrapper[4893]: I0128 15:44:24.900048 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5788fc83-55a9-489b-b094-e6a36fe58124" path="/var/lib/kubelet/pods/5788fc83-55a9-489b-b094-e6a36fe58124/volumes" Jan 28 15:44:25 crc kubenswrapper[4893]: I0128 15:44:25.026912 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2bxvl"] Jan 28 15:44:25 crc kubenswrapper[4893]: I0128 15:44:25.040514 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-2bxvl"] Jan 28 15:44:25 crc kubenswrapper[4893]: I0128 15:44:25.891449 4893 scope.go:117] "RemoveContainer" containerID="cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01" Jan 28 15:44:25 crc kubenswrapper[4893]: E0128 15:44:25.891767 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:44:26 crc kubenswrapper[4893]: I0128 15:44:26.903468 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79f84931-160b-409c-bb0b-193fd8988158" path="/var/lib/kubelet/pods/79f84931-160b-409c-bb0b-193fd8988158/volumes" Jan 28 15:44:28 crc kubenswrapper[4893]: I0128 15:44:28.849417 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-shqgw_8a3c7538-e078-4e89-b34d-dd128942e19d/nmstate-console-plugin/0.log" Jan 28 15:44:29 crc kubenswrapper[4893]: I0128 15:44:29.014318 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-2dw5k_8dcdd494-a746-4e9f-89ad-da96e2b2ab17/nmstate-handler/0.log" Jan 28 15:44:29 crc kubenswrapper[4893]: I0128 15:44:29.067201 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-ft8jl_caa645f7-f683-4bda-851a-91732a41d8fc/kube-rbac-proxy/0.log" Jan 28 15:44:29 crc kubenswrapper[4893]: I0128 15:44:29.153790 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-ft8jl_caa645f7-f683-4bda-851a-91732a41d8fc/nmstate-metrics/0.log" Jan 28 15:44:29 crc kubenswrapper[4893]: I0128 15:44:29.246252 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-82jmt_d87ded33-b86a-4245-b564-87d682532ec8/nmstate-operator/0.log" Jan 28 15:44:29 crc kubenswrapper[4893]: I0128 15:44:29.358757 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-rprnk_e3a5ef47-65ea-4135-ad67-c83b0aa175f4/nmstate-webhook/0.log" Jan 28 15:44:34 crc kubenswrapper[4893]: I0128 15:44:34.954301 4893 scope.go:117] 
"RemoveContainer" containerID="e42174f11cbe4f4338d91cb5807ee52abdfb400cc21db55b31781e752240ccdf" Jan 28 15:44:34 crc kubenswrapper[4893]: I0128 15:44:34.975012 4893 scope.go:117] "RemoveContainer" containerID="72b971eb6981c8a776084ddebc87b507bcc6a2aecbba7c3b172051f37d37e6c8" Jan 28 15:44:35 crc kubenswrapper[4893]: I0128 15:44:35.010704 4893 scope.go:117] "RemoveContainer" containerID="6cef781434e99c9e8c62fda5ccb4b78aee1df550e2b5077264b378c04d68a3b8" Jan 28 15:44:35 crc kubenswrapper[4893]: I0128 15:44:35.039559 4893 scope.go:117] "RemoveContainer" containerID="80ac0038f893c1b3f591dd9edac2e4896a0c1167cae42df06d7833373f00ec78" Jan 28 15:44:35 crc kubenswrapper[4893]: I0128 15:44:35.096696 4893 scope.go:117] "RemoveContainer" containerID="7f2401bf212a6af535113f18466517668c066e7944c8fe333b0d1a142cb2e55a" Jan 28 15:44:35 crc kubenswrapper[4893]: I0128 15:44:35.113878 4893 scope.go:117] "RemoveContainer" containerID="22916b06bb8bc7cd3caf7baf6c7a38757f439eae087bdf60044f4265c822d466" Jan 28 15:44:35 crc kubenswrapper[4893]: I0128 15:44:35.148169 4893 scope.go:117] "RemoveContainer" containerID="d4dc0c8918a1f57be549680e1e2559cc57b0bb1d562071ed9ec465286db525e3" Jan 28 15:44:35 crc kubenswrapper[4893]: I0128 15:44:35.165010 4893 scope.go:117] "RemoveContainer" containerID="877bb78ea59beb334968fdcc181ffbb610cd0da23d315de0bcf0e84bdb1f57df" Jan 28 15:44:35 crc kubenswrapper[4893]: I0128 15:44:35.199742 4893 scope.go:117] "RemoveContainer" containerID="91ee63ce9b5c9bd1a06218d6a8da96c13442369128814ee61c62d7597ef4bd42" Jan 28 15:44:37 crc kubenswrapper[4893]: I0128 15:44:37.891766 4893 scope.go:117] "RemoveContainer" containerID="cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01" Jan 28 15:44:37 crc kubenswrapper[4893]: E0128 15:44:37.892298 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:44:43 crc kubenswrapper[4893]: I0128 15:44:43.036099 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-nbnnn"] Jan 28 15:44:43 crc kubenswrapper[4893]: I0128 15:44:43.044593 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-nbnnn"] Jan 28 15:44:44 crc kubenswrapper[4893]: I0128 15:44:44.901149 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b554e78-6b57-406d-8a05-0e2931db92b7" path="/var/lib/kubelet/pods/2b554e78-6b57-406d-8a05-0e2931db92b7/volumes" Jan 28 15:44:48 crc kubenswrapper[4893]: I0128 15:44:48.892574 4893 scope.go:117] "RemoveContainer" containerID="cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01" Jan 28 15:44:48 crc kubenswrapper[4893]: E0128 15:44:48.893499 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:44:53 crc kubenswrapper[4893]: I0128 15:44:53.703707 4893 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-9vnsm_f3f10444-f010-494d-936a-b3634dde0503/kube-rbac-proxy/0.log" Jan 28 15:44:53 crc kubenswrapper[4893]: I0128 15:44:53.864147 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-9vnsm_f3f10444-f010-494d-936a-b3634dde0503/controller/0.log" Jan 28 15:44:53 crc kubenswrapper[4893]: I0128 15:44:53.898349 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-scpzv_14936c88-97a1-45bd-96f7-947ea39807a0/cp-frr-files/0.log" Jan 28 15:44:54 crc kubenswrapper[4893]: I0128 15:44:54.138714 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-scpzv_14936c88-97a1-45bd-96f7-947ea39807a0/cp-frr-files/0.log" Jan 28 15:44:54 crc kubenswrapper[4893]: I0128 15:44:54.157166 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-scpzv_14936c88-97a1-45bd-96f7-947ea39807a0/cp-metrics/0.log" Jan 28 15:44:54 crc kubenswrapper[4893]: I0128 15:44:54.171355 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-scpzv_14936c88-97a1-45bd-96f7-947ea39807a0/cp-reloader/0.log" Jan 28 15:44:54 crc kubenswrapper[4893]: I0128 15:44:54.198826 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-scpzv_14936c88-97a1-45bd-96f7-947ea39807a0/cp-reloader/0.log" Jan 28 15:44:54 crc kubenswrapper[4893]: I0128 15:44:54.405748 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-scpzv_14936c88-97a1-45bd-96f7-947ea39807a0/cp-frr-files/0.log" Jan 28 15:44:54 crc kubenswrapper[4893]: I0128 15:44:54.415872 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-scpzv_14936c88-97a1-45bd-96f7-947ea39807a0/cp-metrics/0.log" Jan 28 15:44:54 crc kubenswrapper[4893]: I0128 15:44:54.417788 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-scpzv_14936c88-97a1-45bd-96f7-947ea39807a0/cp-metrics/0.log" Jan 28 15:44:54 crc kubenswrapper[4893]: I0128 15:44:54.421618 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-scpzv_14936c88-97a1-45bd-96f7-947ea39807a0/cp-reloader/0.log" Jan 28 15:44:54 crc kubenswrapper[4893]: I0128 15:44:54.579017 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-scpzv_14936c88-97a1-45bd-96f7-947ea39807a0/cp-reloader/0.log" Jan 28 15:44:54 crc kubenswrapper[4893]: I0128 15:44:54.589266 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-scpzv_14936c88-97a1-45bd-96f7-947ea39807a0/cp-metrics/0.log" Jan 28 15:44:54 crc kubenswrapper[4893]: I0128 15:44:54.608428 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-scpzv_14936c88-97a1-45bd-96f7-947ea39807a0/cp-frr-files/0.log" Jan 28 15:44:54 crc kubenswrapper[4893]: I0128 15:44:54.631369 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-scpzv_14936c88-97a1-45bd-96f7-947ea39807a0/controller/0.log" Jan 28 15:44:54 crc kubenswrapper[4893]: I0128 15:44:54.767218 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-scpzv_14936c88-97a1-45bd-96f7-947ea39807a0/frr-metrics/0.log" Jan 28 15:44:54 crc kubenswrapper[4893]: I0128 15:44:54.823295 4893 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-scpzv_14936c88-97a1-45bd-96f7-947ea39807a0/kube-rbac-proxy-frr/0.log" Jan 28 15:44:54 crc kubenswrapper[4893]: I0128 15:44:54.870647 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-scpzv_14936c88-97a1-45bd-96f7-947ea39807a0/kube-rbac-proxy/0.log" Jan 28 15:44:55 crc kubenswrapper[4893]: I0128 15:44:55.025772 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-scpzv_14936c88-97a1-45bd-96f7-947ea39807a0/reloader/0.log" Jan 28 15:44:55 crc kubenswrapper[4893]: I0128 15:44:55.113185 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-9qz4w_7e681161-fdf4-4d05-bc40-328c7368b9ac/frr-k8s-webhook-server/0.log" Jan 28 15:44:55 crc kubenswrapper[4893]: I0128 15:44:55.308241 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5fb7f789ff-r8s24_6a47ff79-34bc-48fe-aade-b4c90918419d/manager/0.log" Jan 28 15:44:55 crc kubenswrapper[4893]: I0128 15:44:55.418409 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6d5bf7b7c8-trnh8_228cc148-34cc-48ee-9a91-61a50b8d2759/webhook-server/0.log" Jan 28 15:44:55 crc kubenswrapper[4893]: I0128 15:44:55.576440 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wr85l_5cdd1458-e530-4bbc-9103-12b9f43ccbe9/kube-rbac-proxy/0.log" Jan 28 15:44:55 crc kubenswrapper[4893]: I0128 15:44:55.911935 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wr85l_5cdd1458-e530-4bbc-9103-12b9f43ccbe9/speaker/0.log" Jan 28 15:44:56 crc kubenswrapper[4893]: I0128 15:44:56.014789 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-scpzv_14936c88-97a1-45bd-96f7-947ea39807a0/frr/0.log" Jan 28 15:45:00 crc kubenswrapper[4893]: I0128 15:45:00.147525 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493585-8cv87"] Jan 28 15:45:00 crc kubenswrapper[4893]: I0128 15:45:00.150601 4893 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8cv87" Jan 28 15:45:00 crc kubenswrapper[4893]: I0128 15:45:00.152841 4893 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 15:45:00 crc kubenswrapper[4893]: I0128 15:45:00.152893 4893 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 15:45:00 crc kubenswrapper[4893]: I0128 15:45:00.155257 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493585-8cv87"] Jan 28 15:45:00 crc kubenswrapper[4893]: I0128 15:45:00.225681 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/072629d4-2a9d-4f69-9193-e68402fb5561-config-volume\") pod \"collect-profiles-29493585-8cv87\" (UID: \"072629d4-2a9d-4f69-9193-e68402fb5561\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8cv87" Jan 28 15:45:00 crc kubenswrapper[4893]: I0128 15:45:00.225884 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqld4\" (UniqueName: \"kubernetes.io/projected/072629d4-2a9d-4f69-9193-e68402fb5561-kube-api-access-qqld4\") pod \"collect-profiles-29493585-8cv87\" (UID: \"072629d4-2a9d-4f69-9193-e68402fb5561\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8cv87" Jan 28 15:45:00 crc kubenswrapper[4893]: I0128 15:45:00.225955 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/072629d4-2a9d-4f69-9193-e68402fb5561-secret-volume\") pod \"collect-profiles-29493585-8cv87\" (UID: \"072629d4-2a9d-4f69-9193-e68402fb5561\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8cv87" Jan 28 15:45:00 crc kubenswrapper[4893]: I0128 15:45:00.327949 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/072629d4-2a9d-4f69-9193-e68402fb5561-config-volume\") pod \"collect-profiles-29493585-8cv87\" (UID: \"072629d4-2a9d-4f69-9193-e68402fb5561\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8cv87" Jan 28 15:45:00 crc kubenswrapper[4893]: I0128 15:45:00.328079 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqld4\" (UniqueName: \"kubernetes.io/projected/072629d4-2a9d-4f69-9193-e68402fb5561-kube-api-access-qqld4\") pod \"collect-profiles-29493585-8cv87\" (UID: \"072629d4-2a9d-4f69-9193-e68402fb5561\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8cv87" Jan 28 15:45:00 crc kubenswrapper[4893]: I0128 15:45:00.328106 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/072629d4-2a9d-4f69-9193-e68402fb5561-secret-volume\") pod \"collect-profiles-29493585-8cv87\" (UID: \"072629d4-2a9d-4f69-9193-e68402fb5561\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8cv87" Jan 28 15:45:00 crc kubenswrapper[4893]: I0128 15:45:00.329041 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/072629d4-2a9d-4f69-9193-e68402fb5561-config-volume\") pod 
\"collect-profiles-29493585-8cv87\" (UID: \"072629d4-2a9d-4f69-9193-e68402fb5561\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8cv87" Jan 28 15:45:00 crc kubenswrapper[4893]: I0128 15:45:00.340135 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/072629d4-2a9d-4f69-9193-e68402fb5561-secret-volume\") pod \"collect-profiles-29493585-8cv87\" (UID: \"072629d4-2a9d-4f69-9193-e68402fb5561\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8cv87" Jan 28 15:45:00 crc kubenswrapper[4893]: I0128 15:45:00.347360 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqld4\" (UniqueName: \"kubernetes.io/projected/072629d4-2a9d-4f69-9193-e68402fb5561-kube-api-access-qqld4\") pod \"collect-profiles-29493585-8cv87\" (UID: \"072629d4-2a9d-4f69-9193-e68402fb5561\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8cv87" Jan 28 15:45:00 crc kubenswrapper[4893]: I0128 15:45:00.474788 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8cv87" Jan 28 15:45:00 crc kubenswrapper[4893]: I0128 15:45:00.904057 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493585-8cv87"] Jan 28 15:45:00 crc kubenswrapper[4893]: I0128 15:45:00.932884 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8cv87" event={"ID":"072629d4-2a9d-4f69-9193-e68402fb5561","Type":"ContainerStarted","Data":"67233351c82a47bc877e241a6dc070a897690489a035ecddcac979afe66e0007"} Jan 28 15:45:01 crc kubenswrapper[4893]: I0128 15:45:01.941338 4893 generic.go:334] "Generic (PLEG): container finished" podID="072629d4-2a9d-4f69-9193-e68402fb5561" containerID="0feff3c9f04e2099ece8e91441bb8e18a560bd95f1d7d7e6b64270a3cbd2c803" exitCode=0 Jan 28 15:45:01 crc kubenswrapper[4893]: I0128 15:45:01.941450 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8cv87" event={"ID":"072629d4-2a9d-4f69-9193-e68402fb5561","Type":"ContainerDied","Data":"0feff3c9f04e2099ece8e91441bb8e18a560bd95f1d7d7e6b64270a3cbd2c803"} Jan 28 15:45:03 crc kubenswrapper[4893]: I0128 15:45:03.246884 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8cv87" Jan 28 15:45:03 crc kubenswrapper[4893]: I0128 15:45:03.373974 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/072629d4-2a9d-4f69-9193-e68402fb5561-config-volume\") pod \"072629d4-2a9d-4f69-9193-e68402fb5561\" (UID: \"072629d4-2a9d-4f69-9193-e68402fb5561\") " Jan 28 15:45:03 crc kubenswrapper[4893]: I0128 15:45:03.374023 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/072629d4-2a9d-4f69-9193-e68402fb5561-secret-volume\") pod \"072629d4-2a9d-4f69-9193-e68402fb5561\" (UID: \"072629d4-2a9d-4f69-9193-e68402fb5561\") " Jan 28 15:45:03 crc kubenswrapper[4893]: I0128 15:45:03.374083 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqld4\" (UniqueName: \"kubernetes.io/projected/072629d4-2a9d-4f69-9193-e68402fb5561-kube-api-access-qqld4\") pod \"072629d4-2a9d-4f69-9193-e68402fb5561\" (UID: \"072629d4-2a9d-4f69-9193-e68402fb5561\") " Jan 28 15:45:03 crc kubenswrapper[4893]: I0128 15:45:03.374707 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/072629d4-2a9d-4f69-9193-e68402fb5561-config-volume" (OuterVolumeSpecName: "config-volume") pod "072629d4-2a9d-4f69-9193-e68402fb5561" (UID: "072629d4-2a9d-4f69-9193-e68402fb5561"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:03 crc kubenswrapper[4893]: I0128 15:45:03.378995 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/072629d4-2a9d-4f69-9193-e68402fb5561-kube-api-access-qqld4" (OuterVolumeSpecName: "kube-api-access-qqld4") pod "072629d4-2a9d-4f69-9193-e68402fb5561" (UID: "072629d4-2a9d-4f69-9193-e68402fb5561"). InnerVolumeSpecName "kube-api-access-qqld4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:03 crc kubenswrapper[4893]: I0128 15:45:03.383742 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/072629d4-2a9d-4f69-9193-e68402fb5561-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "072629d4-2a9d-4f69-9193-e68402fb5561" (UID: "072629d4-2a9d-4f69-9193-e68402fb5561"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:03 crc kubenswrapper[4893]: I0128 15:45:03.476128 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqld4\" (UniqueName: \"kubernetes.io/projected/072629d4-2a9d-4f69-9193-e68402fb5561-kube-api-access-qqld4\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:03 crc kubenswrapper[4893]: I0128 15:45:03.476171 4893 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/072629d4-2a9d-4f69-9193-e68402fb5561-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:03 crc kubenswrapper[4893]: I0128 15:45:03.476185 4893 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/072629d4-2a9d-4f69-9193-e68402fb5561-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:03 crc kubenswrapper[4893]: I0128 15:45:03.891765 4893 scope.go:117] "RemoveContainer" containerID="cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01" Jan 28 15:45:03 crc kubenswrapper[4893]: E0128 15:45:03.892363 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" Jan 28 15:45:03 crc kubenswrapper[4893]: I0128 15:45:03.957794 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8cv87" event={"ID":"072629d4-2a9d-4f69-9193-e68402fb5561","Type":"ContainerDied","Data":"67233351c82a47bc877e241a6dc070a897690489a035ecddcac979afe66e0007"} Jan 28 15:45:03 crc kubenswrapper[4893]: I0128 15:45:03.957838 4893 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67233351c82a47bc877e241a6dc070a897690489a035ecddcac979afe66e0007" Jan 28 15:45:03 crc kubenswrapper[4893]: I0128 15:45:03.957899 4893 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8cv87" Jan 28 15:45:04 crc kubenswrapper[4893]: I0128 15:45:04.329028 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493540-2f7hx"] Jan 28 15:45:04 crc kubenswrapper[4893]: I0128 15:45:04.336203 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493540-2f7hx"] Jan 28 15:45:04 crc kubenswrapper[4893]: I0128 15:45:04.903349 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70e61761-82dd-4ac8-a847-1727769f4424" path="/var/lib/kubelet/pods/70e61761-82dd-4ac8-a847-1727769f4424/volumes" Jan 28 15:45:11 crc kubenswrapper[4893]: I0128 15:45:11.107750 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-86bf966444-cll8k_a79fa730-be33-48f7-9ef0-7964e2afbede/keystone-api/0.log" Jan 28 15:45:11 crc kubenswrapper[4893]: I0128 15:45:11.428150 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-api-0_378ab36a-3a2c-4a6d-836f-92eba12307fe/nova-kuttl-api-api/0.log" Jan 28 15:45:11 crc kubenswrapper[4893]: I0128 15:45:11.674947 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-api-0_378ab36a-3a2c-4a6d-836f-92eba12307fe/nova-kuttl-api-log/0.log" Jan 28 15:45:11 crc kubenswrapper[4893]: I0128 15:45:11.743490 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-conductor-0_22539c1b-d8a1-4f7d-b202-b33f849a21b4/nova-kuttl-cell0-conductor-conductor/0.log" Jan 28 15:45:11 crc kubenswrapper[4893]: I0128 15:45:11.988846 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-conductor-0_f05773d2-58b3-4e11-9962-45502872c375/nova-kuttl-cell1-conductor-conductor/0.log" Jan 28 15:45:12 crc kubenswrapper[4893]: I0128 15:45:12.199608 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-novncproxy-0_90e30875-ed7b-4c7e-b8ed-3deb340cfd2b/nova-kuttl-cell1-novncproxy-novncproxy/0.log" Jan 28 15:45:12 crc kubenswrapper[4893]: I0128 15:45:12.324409 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-metadata-0_28ce2e6b-b04e-4d88-a01b-101d056e8137/nova-kuttl-metadata-log/0.log" Jan 28 15:45:12 crc kubenswrapper[4893]: I0128 15:45:12.421514 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-metadata-0_28ce2e6b-b04e-4d88-a01b-101d056e8137/nova-kuttl-metadata-metadata/0.log" Jan 28 15:45:12 crc kubenswrapper[4893]: I0128 15:45:12.655564 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-scheduler-0_00c7d078-56fd-4f9a-a20a-5dc498625eb1/nova-kuttl-scheduler-scheduler/0.log" Jan 28 15:45:12 crc kubenswrapper[4893]: I0128 15:45:12.752701 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_1418afdb-10ec-4cb7-853d-d0f755621625/mysql-bootstrap/0.log" Jan 28 15:45:12 crc kubenswrapper[4893]: I0128 15:45:12.882901 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_1418afdb-10ec-4cb7-853d-d0f755621625/mysql-bootstrap/0.log" Jan 28 15:45:12 crc kubenswrapper[4893]: I0128 15:45:12.961537 4893 log.go:25] "Finished parsing log file" 
path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_1418afdb-10ec-4cb7-853d-d0f755621625/galera/0.log" Jan 28 15:45:13 crc kubenswrapper[4893]: I0128 15:45:13.123489 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_6cf03c71-da90-490d-8f3c-f5646a45b9d6/mysql-bootstrap/0.log" Jan 28 15:45:13 crc kubenswrapper[4893]: I0128 15:45:13.327322 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_6cf03c71-da90-490d-8f3c-f5646a45b9d6/galera/0.log" Jan 28 15:45:13 crc kubenswrapper[4893]: I0128 15:45:13.356004 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_6cf03c71-da90-490d-8f3c-f5646a45b9d6/mysql-bootstrap/0.log" Jan 28 15:45:13 crc kubenswrapper[4893]: I0128 15:45:13.489082 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_memcached-0_4e4c3f33-0d15-4434-9940-21a310e1e272/memcached/0.log" Jan 28 15:45:13 crc kubenswrapper[4893]: I0128 15:45:13.509223 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstackclient_5a7bef9d-825c-491a-887c-651ea4b6ca59/openstackclient/0.log" Jan 28 15:45:13 crc kubenswrapper[4893]: I0128 15:45:13.586260 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-7dbd979f64-625pv_092a35ea-0d0f-4538-a702-fcf0a09e3683/placement-api/0.log" Jan 28 15:45:13 crc kubenswrapper[4893]: I0128 15:45:13.667852 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-7dbd979f64-625pv_092a35ea-0d0f-4538-a702-fcf0a09e3683/placement-log/0.log" Jan 28 15:45:13 crc kubenswrapper[4893]: I0128 15:45:13.785752 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_dcd1c126-70b7-46e1-8226-bc7dc353ecdb/setup-container/0.log" Jan 28 15:45:13 crc kubenswrapper[4893]: I0128 15:45:13.930284 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_dcd1c126-70b7-46e1-8226-bc7dc353ecdb/setup-container/0.log" Jan 28 15:45:13 crc kubenswrapper[4893]: I0128 15:45:13.997370 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_dcd1c126-70b7-46e1-8226-bc7dc353ecdb/rabbitmq/0.log" Jan 28 15:45:14 crc kubenswrapper[4893]: I0128 15:45:14.046218 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_67b2b466-ebc4-41d8-8b96-a285eb0609f5/setup-container/0.log" Jan 28 15:45:14 crc kubenswrapper[4893]: I0128 15:45:14.203297 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_67b2b466-ebc4-41d8-8b96-a285eb0609f5/rabbitmq/0.log" Jan 28 15:45:14 crc kubenswrapper[4893]: I0128 15:45:14.247685 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_67b2b466-ebc4-41d8-8b96-a285eb0609f5/setup-container/0.log" Jan 28 15:45:14 crc kubenswrapper[4893]: I0128 15:45:14.259544 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_01a81616-675d-43ec-acb2-7a4541b96771/setup-container/0.log" Jan 28 15:45:14 crc kubenswrapper[4893]: I0128 15:45:14.392099 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_01a81616-675d-43ec-acb2-7a4541b96771/setup-container/0.log" Jan 28 15:45:14 crc kubenswrapper[4893]: 
I0128 15:45:14.425077 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_01a81616-675d-43ec-acb2-7a4541b96771/rabbitmq/0.log"
Jan 28 15:45:17 crc kubenswrapper[4893]: I0128 15:45:17.892109 4893 scope.go:117] "RemoveContainer" containerID="cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01"
Jan 28 15:45:17 crc kubenswrapper[4893]: E0128 15:45:17.892600 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd"
Jan 28 15:45:27 crc kubenswrapper[4893]: I0128 15:45:27.462967 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m_37c60b30-8d14-47d7-97ed-0f797932fe82/util/0.log"
Jan 28 15:45:27 crc kubenswrapper[4893]: I0128 15:45:27.590041 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m_37c60b30-8d14-47d7-97ed-0f797932fe82/util/0.log"
Jan 28 15:45:27 crc kubenswrapper[4893]: I0128 15:45:27.652306 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m_37c60b30-8d14-47d7-97ed-0f797932fe82/pull/0.log"
Jan 28 15:45:27 crc kubenswrapper[4893]: I0128 15:45:27.695665 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m_37c60b30-8d14-47d7-97ed-0f797932fe82/pull/0.log"
Jan 28 15:45:27 crc kubenswrapper[4893]: I0128 15:45:27.818915 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m_37c60b30-8d14-47d7-97ed-0f797932fe82/util/0.log"
Jan 28 15:45:27 crc kubenswrapper[4893]: I0128 15:45:27.847025 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m_37c60b30-8d14-47d7-97ed-0f797932fe82/extract/0.log"
Jan 28 15:45:27 crc kubenswrapper[4893]: I0128 15:45:27.877968 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aftw8m_37c60b30-8d14-47d7-97ed-0f797932fe82/pull/0.log"
Jan 28 15:45:28 crc kubenswrapper[4893]: I0128 15:45:28.020805 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z_8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a/util/0.log"
Jan 28 15:45:28 crc kubenswrapper[4893]: I0128 15:45:28.193827 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z_8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a/util/0.log"
Jan 28 15:45:28 crc kubenswrapper[4893]: I0128 15:45:28.193921 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z_8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a/pull/0.log"
Jan 28 15:45:28 crc kubenswrapper[4893]: I0128 15:45:28.221607 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z_8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a/pull/0.log"
Jan 28 15:45:28 crc kubenswrapper[4893]: I0128 15:45:28.364249 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z_8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a/pull/0.log"
Jan 28 15:45:28 crc kubenswrapper[4893]: I0128 15:45:28.382084 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z_8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a/util/0.log"
Jan 28 15:45:28 crc kubenswrapper[4893]: I0128 15:45:28.411481 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrrl6z_8f1a58bf-a2e1-4d5e-8e3e-ba58edec1d9a/extract/0.log"
Jan 28 15:45:28 crc kubenswrapper[4893]: I0128 15:45:28.738115 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn_da66fae5-bc9b-49b3-8ed8-729a9f353b67/util/0.log"
Jan 28 15:45:28 crc kubenswrapper[4893]: I0128 15:45:28.879777 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn_da66fae5-bc9b-49b3-8ed8-729a9f353b67/util/0.log"
Jan 28 15:45:28 crc kubenswrapper[4893]: I0128 15:45:28.937963 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn_da66fae5-bc9b-49b3-8ed8-729a9f353b67/pull/0.log"
Jan 28 15:45:28 crc kubenswrapper[4893]: I0128 15:45:28.980103 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn_da66fae5-bc9b-49b3-8ed8-729a9f353b67/pull/0.log"
Jan 28 15:45:29 crc kubenswrapper[4893]: I0128 15:45:29.125698 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn_da66fae5-bc9b-49b3-8ed8-729a9f353b67/pull/0.log"
Jan 28 15:45:29 crc kubenswrapper[4893]: I0128 15:45:29.131028 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn_da66fae5-bc9b-49b3-8ed8-729a9f353b67/extract/0.log"
Jan 28 15:45:29 crc kubenswrapper[4893]: I0128 15:45:29.149613 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fzbkn_da66fae5-bc9b-49b3-8ed8-729a9f353b67/util/0.log"
Jan 28 15:45:29 crc kubenswrapper[4893]: I0128 15:45:29.309318 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jp7tn_4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631/extract-utilities/0.log"
Jan 28 15:45:29 crc kubenswrapper[4893]: I0128 15:45:29.482837 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jp7tn_4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631/extract-utilities/0.log"
Jan 28 15:45:29 crc kubenswrapper[4893]: I0128 15:45:29.483636 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jp7tn_4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631/extract-content/0.log"
Jan 28 15:45:29 crc kubenswrapper[4893]: I0128 15:45:29.490459 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jp7tn_4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631/extract-content/0.log"
Jan 28 15:45:29 crc kubenswrapper[4893]: I0128 15:45:29.631527 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jp7tn_4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631/extract-utilities/0.log"
Jan 28 15:45:29 crc kubenswrapper[4893]: I0128 15:45:29.669859 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jp7tn_4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631/extract-content/0.log"
Jan 28 15:45:29 crc kubenswrapper[4893]: I0128 15:45:29.913294 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-v4pdb_5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7/extract-utilities/0.log"
Jan 28 15:45:30 crc kubenswrapper[4893]: I0128 15:45:30.084044 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-v4pdb_5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7/extract-utilities/0.log"
Jan 28 15:45:30 crc kubenswrapper[4893]: I0128 15:45:30.109705 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-v4pdb_5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7/extract-content/0.log"
Jan 28 15:45:30 crc kubenswrapper[4893]: I0128 15:45:30.156922 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-v4pdb_5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7/extract-content/0.log"
Jan 28 15:45:30 crc kubenswrapper[4893]: I0128 15:45:30.225783 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jp7tn_4b9c92c1-9fb9-4eb9-b5e3-9b4354a34631/registry-server/0.log"
Jan 28 15:45:30 crc kubenswrapper[4893]: I0128 15:45:30.360630 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-v4pdb_5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7/extract-utilities/0.log"
Jan 28 15:45:30 crc kubenswrapper[4893]: I0128 15:45:30.434123 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-v4pdb_5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7/extract-content/0.log"
Jan 28 15:45:30 crc kubenswrapper[4893]: I0128 15:45:30.669353 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xlssl_fc249fd3-d895-44db-8a63-38334231d809/extract-utilities/0.log"
Jan 28 15:45:30 crc kubenswrapper[4893]: I0128 15:45:30.682795 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-prr4s_07fe07b9-ae23-4203-b85e-02462161f5b3/marketplace-operator/0.log"
Jan 28 15:45:30 crc kubenswrapper[4893]: I0128 15:45:30.882754 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xlssl_fc249fd3-d895-44db-8a63-38334231d809/extract-utilities/0.log"
Jan 28 15:45:30 crc kubenswrapper[4893]: I0128 15:45:30.900083 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xlssl_fc249fd3-d895-44db-8a63-38334231d809/extract-content/0.log"
Jan 28 15:45:30 crc kubenswrapper[4893]: I0128 15:45:30.934870 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-v4pdb_5ecfb5b1-5ca3-4ada-a9ee-85072c22cfd7/registry-server/0.log"
Jan 28 15:45:30 crc kubenswrapper[4893]: I0128 15:45:30.951359 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xlssl_fc249fd3-d895-44db-8a63-38334231d809/extract-content/0.log"
Jan 28 15:45:31 crc kubenswrapper[4893]: I0128 15:45:31.147218 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xlssl_fc249fd3-d895-44db-8a63-38334231d809/extract-utilities/0.log"
Jan 28 15:45:31 crc kubenswrapper[4893]: I0128 15:45:31.186445 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xlssl_fc249fd3-d895-44db-8a63-38334231d809/extract-content/0.log"
Jan 28 15:45:31 crc kubenswrapper[4893]: I0128 15:45:31.335971 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xlssl_fc249fd3-d895-44db-8a63-38334231d809/registry-server/0.log"
Jan 28 15:45:31 crc kubenswrapper[4893]: I0128 15:45:31.357054 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b7fw8_c21ef389-3376-4802-93c1-3115af586c8b/extract-utilities/0.log"
Jan 28 15:45:31 crc kubenswrapper[4893]: I0128 15:45:31.526911 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b7fw8_c21ef389-3376-4802-93c1-3115af586c8b/extract-utilities/0.log"
Jan 28 15:45:31 crc kubenswrapper[4893]: I0128 15:45:31.529666 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b7fw8_c21ef389-3376-4802-93c1-3115af586c8b/extract-content/0.log"
Jan 28 15:45:31 crc kubenswrapper[4893]: I0128 15:45:31.532364 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b7fw8_c21ef389-3376-4802-93c1-3115af586c8b/extract-content/0.log"
Jan 28 15:45:31 crc kubenswrapper[4893]: I0128 15:45:31.673947 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b7fw8_c21ef389-3376-4802-93c1-3115af586c8b/extract-utilities/0.log"
Jan 28 15:45:31 crc kubenswrapper[4893]: I0128 15:45:31.697051 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b7fw8_c21ef389-3376-4802-93c1-3115af586c8b/extract-content/0.log"
Jan 28 15:45:31 crc kubenswrapper[4893]: I0128 15:45:31.891133 4893 scope.go:117] "RemoveContainer" containerID="cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01"
Jan 28 15:45:31 crc kubenswrapper[4893]: E0128 15:45:31.891342 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd"
Jan 28 15:45:32 crc kubenswrapper[4893]: I0128 15:45:32.109537 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b7fw8_c21ef389-3376-4802-93c1-3115af586c8b/registry-server/0.log"
Jan 28 15:45:35 crc kubenswrapper[4893]: I0128 15:45:35.367005 4893 scope.go:117] "RemoveContainer" containerID="8f7c133abbe6fdcc809602427be1caa77a4ff32912b2aec60602b480e91b2f76"
Jan 28 15:45:35 crc kubenswrapper[4893]: I0128 15:45:35.404751 4893 scope.go:117] "RemoveContainer" containerID="6bdc499c7e005d1c8dcb20fc5a067717620c7df8396b4fbbf84d56ca8f3e40b6"
Jan 28 15:45:45 crc kubenswrapper[4893]: I0128 15:45:45.892291 4893 scope.go:117] "RemoveContainer" containerID="cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01"
Jan 28 15:45:45 crc kubenswrapper[4893]: E0128 15:45:45.893030 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd"
Jan 28 15:45:58 crc kubenswrapper[4893]: I0128 15:45:58.892811 4893 scope.go:117] "RemoveContainer" containerID="cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01"
Jan 28 15:45:58 crc kubenswrapper[4893]: E0128 15:45:58.894212 4893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l2nht_openshift-machine-config-operator(b2ddd967-f9a8-464a-95de-512c9c5874fd)\"" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd"
Jan 28 15:46:12 crc kubenswrapper[4893]: I0128 15:46:12.900217 4893 scope.go:117] "RemoveContainer" containerID="cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01"
Jan 28 15:46:13 crc kubenswrapper[4893]: I0128 15:46:13.466281 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" event={"ID":"b2ddd967-f9a8-464a-95de-512c9c5874fd","Type":"ContainerStarted","Data":"4d3e643fbe9f36ca2c6a3682b66faf889a0fdc3126ace24f8151d5686981c487"}
Jan 28 15:46:35 crc kubenswrapper[4893]: I0128 15:46:35.028940 4893 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rvxwx"]
Jan 28 15:46:35 crc kubenswrapper[4893]: E0128 15:46:35.031796 4893 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="072629d4-2a9d-4f69-9193-e68402fb5561" containerName="collect-profiles"
Jan 28 15:46:35 crc kubenswrapper[4893]: I0128 15:46:35.031936 4893 state_mem.go:107] "Deleted CPUSet assignment" podUID="072629d4-2a9d-4f69-9193-e68402fb5561" containerName="collect-profiles"
Jan 28 15:46:35 crc kubenswrapper[4893]: I0128 15:46:35.035051 4893 memory_manager.go:354] "RemoveStaleState removing state" podUID="072629d4-2a9d-4f69-9193-e68402fb5561" containerName="collect-profiles"
Jan 28 15:46:35 crc kubenswrapper[4893]: I0128 15:46:35.046243 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvxwx"
Jan 28 15:46:35 crc kubenswrapper[4893]: I0128 15:46:35.052896 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvxwx"]
Jan 28 15:46:35 crc kubenswrapper[4893]: I0128 15:46:35.111812 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d952026-d120-40ab-9ef5-bdd210e7961d-catalog-content\") pod \"redhat-operators-rvxwx\" (UID: \"1d952026-d120-40ab-9ef5-bdd210e7961d\") " pod="openshift-marketplace/redhat-operators-rvxwx"
Jan 28 15:46:35 crc kubenswrapper[4893]: I0128 15:46:35.111863 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh5m7\" (UniqueName: \"kubernetes.io/projected/1d952026-d120-40ab-9ef5-bdd210e7961d-kube-api-access-dh5m7\") pod \"redhat-operators-rvxwx\" (UID: \"1d952026-d120-40ab-9ef5-bdd210e7961d\") " pod="openshift-marketplace/redhat-operators-rvxwx"
Jan 28 15:46:35 crc kubenswrapper[4893]: I0128 15:46:35.111936 4893 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d952026-d120-40ab-9ef5-bdd210e7961d-utilities\") pod \"redhat-operators-rvxwx\" (UID: \"1d952026-d120-40ab-9ef5-bdd210e7961d\") " pod="openshift-marketplace/redhat-operators-rvxwx"
Jan 28 15:46:35 crc kubenswrapper[4893]: I0128 15:46:35.214112 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d952026-d120-40ab-9ef5-bdd210e7961d-catalog-content\") pod \"redhat-operators-rvxwx\" (UID: \"1d952026-d120-40ab-9ef5-bdd210e7961d\") " pod="openshift-marketplace/redhat-operators-rvxwx"
Jan 28 15:46:35 crc kubenswrapper[4893]: I0128 15:46:35.214175 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh5m7\" (UniqueName: \"kubernetes.io/projected/1d952026-d120-40ab-9ef5-bdd210e7961d-kube-api-access-dh5m7\") pod \"redhat-operators-rvxwx\" (UID: \"1d952026-d120-40ab-9ef5-bdd210e7961d\") " pod="openshift-marketplace/redhat-operators-rvxwx"
Jan 28 15:46:35 crc kubenswrapper[4893]: I0128 15:46:35.214267 4893 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d952026-d120-40ab-9ef5-bdd210e7961d-utilities\") pod \"redhat-operators-rvxwx\" (UID: \"1d952026-d120-40ab-9ef5-bdd210e7961d\") " pod="openshift-marketplace/redhat-operators-rvxwx"
Jan 28 15:46:35 crc kubenswrapper[4893]: I0128 15:46:35.215026 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d952026-d120-40ab-9ef5-bdd210e7961d-catalog-content\") pod \"redhat-operators-rvxwx\" (UID: \"1d952026-d120-40ab-9ef5-bdd210e7961d\") " pod="openshift-marketplace/redhat-operators-rvxwx"
Jan 28 15:46:35 crc kubenswrapper[4893]: I0128 15:46:35.215156 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d952026-d120-40ab-9ef5-bdd210e7961d-utilities\") pod \"redhat-operators-rvxwx\" (UID: \"1d952026-d120-40ab-9ef5-bdd210e7961d\") " pod="openshift-marketplace/redhat-operators-rvxwx"
Jan 28 15:46:35 crc kubenswrapper[4893]: I0128 15:46:35.249554 4893 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh5m7\" (UniqueName: \"kubernetes.io/projected/1d952026-d120-40ab-9ef5-bdd210e7961d-kube-api-access-dh5m7\") pod \"redhat-operators-rvxwx\" (UID: \"1d952026-d120-40ab-9ef5-bdd210e7961d\") " pod="openshift-marketplace/redhat-operators-rvxwx"
Jan 28 15:46:35 crc kubenswrapper[4893]: I0128 15:46:35.372884 4893 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvxwx"
Jan 28 15:46:35 crc kubenswrapper[4893]: I0128 15:46:35.837251 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvxwx"]
Jan 28 15:46:36 crc kubenswrapper[4893]: I0128 15:46:36.649994 4893 generic.go:334] "Generic (PLEG): container finished" podID="1d952026-d120-40ab-9ef5-bdd210e7961d" containerID="573912f98279d1b6f5eb2105cfaa6d11d0243eb7a5d962b3b99df7fa2d127923" exitCode=0
Jan 28 15:46:36 crc kubenswrapper[4893]: I0128 15:46:36.650178 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvxwx" event={"ID":"1d952026-d120-40ab-9ef5-bdd210e7961d","Type":"ContainerDied","Data":"573912f98279d1b6f5eb2105cfaa6d11d0243eb7a5d962b3b99df7fa2d127923"}
Jan 28 15:46:36 crc kubenswrapper[4893]: I0128 15:46:36.651410 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvxwx" event={"ID":"1d952026-d120-40ab-9ef5-bdd210e7961d","Type":"ContainerStarted","Data":"0387a6836f9e92b274e8eb914f3800f1687dd639baaab4155bdeb782195dc125"}
Jan 28 15:46:46 crc kubenswrapper[4893]: I0128 15:46:46.732328 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvxwx" event={"ID":"1d952026-d120-40ab-9ef5-bdd210e7961d","Type":"ContainerStarted","Data":"6e2cf90ba88955302ea6224abac52077a4df10ea9ff0602a6a8f7e62a0fa6764"}
Jan 28 15:46:50 crc kubenswrapper[4893]: I0128 15:46:50.760965 4893 generic.go:334] "Generic (PLEG): container finished" podID="1d952026-d120-40ab-9ef5-bdd210e7961d" containerID="6e2cf90ba88955302ea6224abac52077a4df10ea9ff0602a6a8f7e62a0fa6764" exitCode=0
Jan 28 15:46:50 crc kubenswrapper[4893]: I0128 15:46:50.761024 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvxwx" event={"ID":"1d952026-d120-40ab-9ef5-bdd210e7961d","Type":"ContainerDied","Data":"6e2cf90ba88955302ea6224abac52077a4df10ea9ff0602a6a8f7e62a0fa6764"}
Jan 28 15:46:51 crc kubenswrapper[4893]: I0128 15:46:51.772332 4893 generic.go:334] "Generic (PLEG): container finished" podID="a70520a2-7db0-4c8e-b0e6-18d66c6d0e76" containerID="192d4d6094779e7e00ad31b96e2d3754d036acb7da7c0b24623424aa129f4866" exitCode=0
Jan 28 15:46:51 crc kubenswrapper[4893]: I0128 15:46:51.772435 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2b4d6/must-gather-wlf6l" event={"ID":"a70520a2-7db0-4c8e-b0e6-18d66c6d0e76","Type":"ContainerDied","Data":"192d4d6094779e7e00ad31b96e2d3754d036acb7da7c0b24623424aa129f4866"}
Jan 28 15:46:51 crc kubenswrapper[4893]: I0128 15:46:51.773015 4893 scope.go:117] "RemoveContainer" containerID="192d4d6094779e7e00ad31b96e2d3754d036acb7da7c0b24623424aa129f4866"
Jan 28 15:46:52 crc kubenswrapper[4893]: I0128 15:46:52.781958 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvxwx" event={"ID":"1d952026-d120-40ab-9ef5-bdd210e7961d","Type":"ContainerStarted","Data":"16c8876a9a96031c97f7acdab2968a29041aaaf339723ecb776dcf467dc819bc"}
Jan 28 15:46:52 crc kubenswrapper[4893]: I0128 15:46:52.788145 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2b4d6_must-gather-wlf6l_a70520a2-7db0-4c8e-b0e6-18d66c6d0e76/gather/0.log"
Jan 28 15:46:52 crc kubenswrapper[4893]: I0128 15:46:52.827249 4893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rvxwx" podStartSLOduration=2.722341376 podStartE2EDuration="17.827229151s" podCreationTimestamp="2026-01-28 15:46:35 +0000 UTC" firstStartedPulling="2026-01-28 15:46:36.651846244 +0000 UTC m=+2714.425461272" lastFinishedPulling="2026-01-28 15:46:51.756734019 +0000 UTC m=+2729.530349047" observedRunningTime="2026-01-28 15:46:52.822728328 +0000 UTC m=+2730.596343366" watchObservedRunningTime="2026-01-28 15:46:52.827229151 +0000 UTC m=+2730.600844189"
Jan 28 15:46:55 crc kubenswrapper[4893]: I0128 15:46:55.373327 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rvxwx"
Jan 28 15:46:55 crc kubenswrapper[4893]: I0128 15:46:55.373719 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rvxwx"
Jan 28 15:46:56 crc kubenswrapper[4893]: I0128 15:46:56.425677 4893 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rvxwx" podUID="1d952026-d120-40ab-9ef5-bdd210e7961d" containerName="registry-server" probeResult="failure" output=<
Jan 28 15:46:56 crc kubenswrapper[4893]: timeout: failed to connect service ":50051" within 1s
Jan 28 15:46:56 crc kubenswrapper[4893]: >
Jan 28 15:47:01 crc kubenswrapper[4893]: I0128 15:47:01.471846 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2b4d6/must-gather-wlf6l"]
Jan 28 15:47:01 crc kubenswrapper[4893]: I0128 15:47:01.472533 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-2b4d6/must-gather-wlf6l" podUID="a70520a2-7db0-4c8e-b0e6-18d66c6d0e76" containerName="copy" containerID="cri-o://36c446eefdf48b09fc0dd34321adc2a2dad4517320e1038ccd137b74c9f34718" gracePeriod=2
Jan 28 15:47:01 crc kubenswrapper[4893]: I0128 15:47:01.477660 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2b4d6/must-gather-wlf6l"]
Jan 28 15:47:01 crc kubenswrapper[4893]: I0128 15:47:01.856761 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2b4d6_must-gather-wlf6l_a70520a2-7db0-4c8e-b0e6-18d66c6d0e76/copy/0.log"
Jan 28 15:47:01 crc kubenswrapper[4893]: I0128 15:47:01.857319 4893 generic.go:334] "Generic (PLEG): container finished" podID="a70520a2-7db0-4c8e-b0e6-18d66c6d0e76" containerID="36c446eefdf48b09fc0dd34321adc2a2dad4517320e1038ccd137b74c9f34718" exitCode=143
Jan 28 15:47:01 crc kubenswrapper[4893]: I0128 15:47:01.993391 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2b4d6_must-gather-wlf6l_a70520a2-7db0-4c8e-b0e6-18d66c6d0e76/copy/0.log"
Jan 28 15:47:01 crc kubenswrapper[4893]: I0128 15:47:01.994118 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2b4d6/must-gather-wlf6l"
Jan 28 15:47:02 crc kubenswrapper[4893]: I0128 15:47:02.055186 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmd8k\" (UniqueName: \"kubernetes.io/projected/a70520a2-7db0-4c8e-b0e6-18d66c6d0e76-kube-api-access-tmd8k\") pod \"a70520a2-7db0-4c8e-b0e6-18d66c6d0e76\" (UID: \"a70520a2-7db0-4c8e-b0e6-18d66c6d0e76\") "
Jan 28 15:47:02 crc kubenswrapper[4893]: I0128 15:47:02.055563 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a70520a2-7db0-4c8e-b0e6-18d66c6d0e76-must-gather-output\") pod \"a70520a2-7db0-4c8e-b0e6-18d66c6d0e76\" (UID: \"a70520a2-7db0-4c8e-b0e6-18d66c6d0e76\") "
Jan 28 15:47:02 crc kubenswrapper[4893]: I0128 15:47:02.062153 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a70520a2-7db0-4c8e-b0e6-18d66c6d0e76-kube-api-access-tmd8k" (OuterVolumeSpecName: "kube-api-access-tmd8k") pod "a70520a2-7db0-4c8e-b0e6-18d66c6d0e76" (UID: "a70520a2-7db0-4c8e-b0e6-18d66c6d0e76"). InnerVolumeSpecName "kube-api-access-tmd8k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:47:02 crc kubenswrapper[4893]: I0128 15:47:02.147661 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a70520a2-7db0-4c8e-b0e6-18d66c6d0e76-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a70520a2-7db0-4c8e-b0e6-18d66c6d0e76" (UID: "a70520a2-7db0-4c8e-b0e6-18d66c6d0e76"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:47:02 crc kubenswrapper[4893]: I0128 15:47:02.161428 4893 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a70520a2-7db0-4c8e-b0e6-18d66c6d0e76-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 28 15:47:02 crc kubenswrapper[4893]: I0128 15:47:02.161516 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmd8k\" (UniqueName: \"kubernetes.io/projected/a70520a2-7db0-4c8e-b0e6-18d66c6d0e76-kube-api-access-tmd8k\") on node \"crc\" DevicePath \"\""
Jan 28 15:47:02 crc kubenswrapper[4893]: I0128 15:47:02.866495 4893 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2b4d6_must-gather-wlf6l_a70520a2-7db0-4c8e-b0e6-18d66c6d0e76/copy/0.log"
Jan 28 15:47:02 crc kubenswrapper[4893]: I0128 15:47:02.867171 4893 scope.go:117] "RemoveContainer" containerID="36c446eefdf48b09fc0dd34321adc2a2dad4517320e1038ccd137b74c9f34718"
Jan 28 15:47:02 crc kubenswrapper[4893]: I0128 15:47:02.867206 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2b4d6/must-gather-wlf6l"
Jan 28 15:47:02 crc kubenswrapper[4893]: I0128 15:47:02.898546 4893 scope.go:117] "RemoveContainer" containerID="192d4d6094779e7e00ad31b96e2d3754d036acb7da7c0b24623424aa129f4866"
Jan 28 15:47:02 crc kubenswrapper[4893]: I0128 15:47:02.906260 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a70520a2-7db0-4c8e-b0e6-18d66c6d0e76" path="/var/lib/kubelet/pods/a70520a2-7db0-4c8e-b0e6-18d66c6d0e76/volumes"
Jan 28 15:47:05 crc kubenswrapper[4893]: I0128 15:47:05.414607 4893 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rvxwx"
Jan 28 15:47:05 crc kubenswrapper[4893]: I0128 15:47:05.460368 4893 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rvxwx"
Jan 28 15:47:06 crc kubenswrapper[4893]: I0128 15:47:06.044198 4893 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvxwx"]
Jan 28 15:47:06 crc kubenswrapper[4893]: I0128 15:47:06.220696 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b7fw8"]
Jan 28 15:47:06 crc kubenswrapper[4893]: I0128 15:47:06.220983 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b7fw8" podUID="c21ef389-3376-4802-93c1-3115af586c8b" containerName="registry-server" containerID="cri-o://22f0532b966c060ac0ad858e068af00c39fd6a516bcc1dceb5bb8dedd613f315" gracePeriod=2
Jan 28 15:47:06 crc kubenswrapper[4893]: I0128 15:47:06.752569 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b7fw8"
Jan 28 15:47:06 crc kubenswrapper[4893]: I0128 15:47:06.833764 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21ef389-3376-4802-93c1-3115af586c8b-catalog-content\") pod \"c21ef389-3376-4802-93c1-3115af586c8b\" (UID: \"c21ef389-3376-4802-93c1-3115af586c8b\") "
Jan 28 15:47:06 crc kubenswrapper[4893]: I0128 15:47:06.833869 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8p8s\" (UniqueName: \"kubernetes.io/projected/c21ef389-3376-4802-93c1-3115af586c8b-kube-api-access-x8p8s\") pod \"c21ef389-3376-4802-93c1-3115af586c8b\" (UID: \"c21ef389-3376-4802-93c1-3115af586c8b\") "
Jan 28 15:47:06 crc kubenswrapper[4893]: I0128 15:47:06.834004 4893 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21ef389-3376-4802-93c1-3115af586c8b-utilities\") pod \"c21ef389-3376-4802-93c1-3115af586c8b\" (UID: \"c21ef389-3376-4802-93c1-3115af586c8b\") "
Jan 28 15:47:06 crc kubenswrapper[4893]: I0128 15:47:06.834546 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c21ef389-3376-4802-93c1-3115af586c8b-utilities" (OuterVolumeSpecName: "utilities") pod "c21ef389-3376-4802-93c1-3115af586c8b" (UID: "c21ef389-3376-4802-93c1-3115af586c8b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:47:06 crc kubenswrapper[4893]: I0128 15:47:06.840706 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c21ef389-3376-4802-93c1-3115af586c8b-kube-api-access-x8p8s" (OuterVolumeSpecName: "kube-api-access-x8p8s") pod "c21ef389-3376-4802-93c1-3115af586c8b" (UID: "c21ef389-3376-4802-93c1-3115af586c8b"). InnerVolumeSpecName "kube-api-access-x8p8s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:47:06 crc kubenswrapper[4893]: I0128 15:47:06.897460 4893 generic.go:334] "Generic (PLEG): container finished" podID="c21ef389-3376-4802-93c1-3115af586c8b" containerID="22f0532b966c060ac0ad858e068af00c39fd6a516bcc1dceb5bb8dedd613f315" exitCode=0
Jan 28 15:47:06 crc kubenswrapper[4893]: I0128 15:47:06.898334 4893 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b7fw8"
Jan 28 15:47:06 crc kubenswrapper[4893]: I0128 15:47:06.925699 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7fw8" event={"ID":"c21ef389-3376-4802-93c1-3115af586c8b","Type":"ContainerDied","Data":"22f0532b966c060ac0ad858e068af00c39fd6a516bcc1dceb5bb8dedd613f315"}
Jan 28 15:47:06 crc kubenswrapper[4893]: I0128 15:47:06.925747 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7fw8" event={"ID":"c21ef389-3376-4802-93c1-3115af586c8b","Type":"ContainerDied","Data":"ad54dbcbd8842ab5fd8ece097bbff0065342c15556a0a5d725a0d895f614e948"}
Jan 28 15:47:06 crc kubenswrapper[4893]: I0128 15:47:06.925768 4893 scope.go:117] "RemoveContainer" containerID="22f0532b966c060ac0ad858e068af00c39fd6a516bcc1dceb5bb8dedd613f315"
Jan 28 15:47:06 crc kubenswrapper[4893]: I0128 15:47:06.937602 4893 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21ef389-3376-4802-93c1-3115af586c8b-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 15:47:06 crc kubenswrapper[4893]: I0128 15:47:06.937637 4893 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8p8s\" (UniqueName: \"kubernetes.io/projected/c21ef389-3376-4802-93c1-3115af586c8b-kube-api-access-x8p8s\") on node \"crc\" DevicePath \"\""
Jan 28 15:47:06 crc kubenswrapper[4893]: I0128 15:47:06.952523 4893 scope.go:117] "RemoveContainer" containerID="fb0dd1332ab28cee0703c174e48d7a742baac191cdbf33ee3a2169f6b2ec7228"
Jan 28 15:47:06 crc kubenswrapper[4893]: I0128 15:47:06.954000 4893 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c21ef389-3376-4802-93c1-3115af586c8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c21ef389-3376-4802-93c1-3115af586c8b" (UID: "c21ef389-3376-4802-93c1-3115af586c8b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:47:06 crc kubenswrapper[4893]: I0128 15:47:06.983693 4893 scope.go:117] "RemoveContainer" containerID="d05dee5dbc97a443b912721dd8670ea95da5b05036c49906b69866ca727c638c"
Jan 28 15:47:07 crc kubenswrapper[4893]: I0128 15:47:07.006317 4893 scope.go:117] "RemoveContainer" containerID="22f0532b966c060ac0ad858e068af00c39fd6a516bcc1dceb5bb8dedd613f315"
Jan 28 15:47:07 crc kubenswrapper[4893]: E0128 15:47:07.006837 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22f0532b966c060ac0ad858e068af00c39fd6a516bcc1dceb5bb8dedd613f315\": container with ID starting with 22f0532b966c060ac0ad858e068af00c39fd6a516bcc1dceb5bb8dedd613f315 not found: ID does not exist" containerID="22f0532b966c060ac0ad858e068af00c39fd6a516bcc1dceb5bb8dedd613f315"
Jan 28 15:47:07 crc kubenswrapper[4893]: I0128 15:47:07.006908 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22f0532b966c060ac0ad858e068af00c39fd6a516bcc1dceb5bb8dedd613f315"} err="failed to get container status \"22f0532b966c060ac0ad858e068af00c39fd6a516bcc1dceb5bb8dedd613f315\": rpc error: code = NotFound desc = could not find container \"22f0532b966c060ac0ad858e068af00c39fd6a516bcc1dceb5bb8dedd613f315\": container with ID starting with 22f0532b966c060ac0ad858e068af00c39fd6a516bcc1dceb5bb8dedd613f315 not found: ID does not exist"
Jan 28 15:47:07 crc kubenswrapper[4893]: I0128 15:47:07.006945 4893 scope.go:117] "RemoveContainer" containerID="fb0dd1332ab28cee0703c174e48d7a742baac191cdbf33ee3a2169f6b2ec7228"
Jan 28 15:47:07 crc kubenswrapper[4893]: E0128 15:47:07.007483 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb0dd1332ab28cee0703c174e48d7a742baac191cdbf33ee3a2169f6b2ec7228\": container with ID starting with fb0dd1332ab28cee0703c174e48d7a742baac191cdbf33ee3a2169f6b2ec7228 not found: ID does not exist" containerID="fb0dd1332ab28cee0703c174e48d7a742baac191cdbf33ee3a2169f6b2ec7228"
Jan 28 15:47:07 crc kubenswrapper[4893]: I0128 15:47:07.007522 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb0dd1332ab28cee0703c174e48d7a742baac191cdbf33ee3a2169f6b2ec7228"} err="failed to get container status \"fb0dd1332ab28cee0703c174e48d7a742baac191cdbf33ee3a2169f6b2ec7228\": rpc error: code = NotFound desc = could not find container \"fb0dd1332ab28cee0703c174e48d7a742baac191cdbf33ee3a2169f6b2ec7228\": container with ID starting with fb0dd1332ab28cee0703c174e48d7a742baac191cdbf33ee3a2169f6b2ec7228 not found: ID does not exist"
Jan 28 15:47:07 crc kubenswrapper[4893]: I0128 15:47:07.007544 4893 scope.go:117] "RemoveContainer" containerID="d05dee5dbc97a443b912721dd8670ea95da5b05036c49906b69866ca727c638c"
Jan 28 15:47:07 crc kubenswrapper[4893]: E0128 15:47:07.007834 4893 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d05dee5dbc97a443b912721dd8670ea95da5b05036c49906b69866ca727c638c\": container with ID starting with d05dee5dbc97a443b912721dd8670ea95da5b05036c49906b69866ca727c638c not found: ID does not exist" containerID="d05dee5dbc97a443b912721dd8670ea95da5b05036c49906b69866ca727c638c"
Jan 28 15:47:07 crc kubenswrapper[4893]: I0128 15:47:07.007864 4893 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d05dee5dbc97a443b912721dd8670ea95da5b05036c49906b69866ca727c638c"} err="failed to get container status \"d05dee5dbc97a443b912721dd8670ea95da5b05036c49906b69866ca727c638c\": rpc error: code = NotFound desc = could not find container \"d05dee5dbc97a443b912721dd8670ea95da5b05036c49906b69866ca727c638c\": container with ID starting with d05dee5dbc97a443b912721dd8670ea95da5b05036c49906b69866ca727c638c not found: ID does not exist"
Jan 28 15:47:07 crc kubenswrapper[4893]: I0128 15:47:07.039408 4893 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21ef389-3376-4802-93c1-3115af586c8b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 15:47:07 crc kubenswrapper[4893]: I0128 15:47:07.229047 4893 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b7fw8"]
Jan 28 15:47:07 crc kubenswrapper[4893]: I0128 15:47:07.235109 4893 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b7fw8"]
Jan 28 15:47:08 crc kubenswrapper[4893]: I0128 15:47:08.901357 4893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c21ef389-3376-4802-93c1-3115af586c8b" path="/var/lib/kubelet/pods/c21ef389-3376-4802-93c1-3115af586c8b/volumes"
Jan 28 15:48:35 crc kubenswrapper[4893]: I0128 15:48:35.722368 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 15:48:35 crc kubenswrapper[4893]: I0128 15:48:35.722837 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 15:49:05 crc kubenswrapper[4893]: I0128 15:49:05.722696 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 15:49:05 crc kubenswrapper[4893]: I0128 15:49:05.723306 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 15:49:35 crc kubenswrapper[4893]: I0128 15:49:35.722061 4893 patch_prober.go:28] interesting pod/machine-config-daemon-l2nht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 15:49:35 crc kubenswrapper[4893]: I0128 15:49:35.722714 4893 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 15:49:35 crc kubenswrapper[4893]: I0128 15:49:35.722782 4893 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l2nht"
Jan 28 15:49:35 crc kubenswrapper[4893]: I0128 15:49:35.723505 4893 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4d3e643fbe9f36ca2c6a3682b66faf889a0fdc3126ace24f8151d5686981c487"} pod="openshift-machine-config-operator/machine-config-daemon-l2nht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 15:49:35 crc kubenswrapper[4893]: I0128 15:49:35.723596 4893 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" podUID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerName="machine-config-daemon" containerID="cri-o://4d3e643fbe9f36ca2c6a3682b66faf889a0fdc3126ace24f8151d5686981c487" gracePeriod=600
Jan 28 15:49:35 crc kubenswrapper[4893]: I0128 15:49:35.966167 4893 generic.go:334] "Generic (PLEG): container finished" podID="b2ddd967-f9a8-464a-95de-512c9c5874fd" containerID="4d3e643fbe9f36ca2c6a3682b66faf889a0fdc3126ace24f8151d5686981c487" exitCode=0
Jan 28 15:49:35 crc kubenswrapper[4893]: I0128 15:49:35.966244 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" event={"ID":"b2ddd967-f9a8-464a-95de-512c9c5874fd","Type":"ContainerDied","Data":"4d3e643fbe9f36ca2c6a3682b66faf889a0fdc3126ace24f8151d5686981c487"}
Jan 28 15:49:35 crc kubenswrapper[4893]: I0128 15:49:35.966402 4893 scope.go:117] "RemoveContainer" containerID="cc8ade766da99535c6500c3f18515796a01136718af2f3cc371eb42de857ba01"
Jan 28 15:49:36 crc kubenswrapper[4893]: I0128 15:49:36.974629 4893 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l2nht" event={"ID":"b2ddd967-f9a8-464a-95de-512c9c5874fd","Type":"ContainerStarted","Data":"42d91d02a3248183bd54b6ee01be8f7222c3f8b6ee16b725f9ceb2fc21d94c20"}